Dec  1 16:28:12 np0005541603 kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec  1 16:28:12 np0005541603 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  1 16:28:12 np0005541603 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  1 16:28:12 np0005541603 kernel: BIOS-provided physical RAM map:
Dec  1 16:28:12 np0005541603 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  1 16:28:12 np0005541603 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  1 16:28:12 np0005541603 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  1 16:28:12 np0005541603 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  1 16:28:12 np0005541603 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  1 16:28:12 np0005541603 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  1 16:28:12 np0005541603 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  1 16:28:12 np0005541603 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec  1 16:28:12 np0005541603 kernel: NX (Execute Disable) protection: active
Dec  1 16:28:12 np0005541603 kernel: APIC: Static calls initialized
Dec  1 16:28:12 np0005541603 kernel: SMBIOS 2.8 present.
Dec  1 16:28:12 np0005541603 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  1 16:28:12 np0005541603 kernel: Hypervisor detected: KVM
Dec  1 16:28:12 np0005541603 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  1 16:28:12 np0005541603 kernel: kvm-clock: using sched offset of 3107065622 cycles
Dec  1 16:28:12 np0005541603 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  1 16:28:12 np0005541603 kernel: tsc: Detected 2800.000 MHz processor
Dec  1 16:28:12 np0005541603 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  1 16:28:12 np0005541603 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  1 16:28:12 np0005541603 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  1 16:28:12 np0005541603 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  1 16:28:12 np0005541603 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  1 16:28:12 np0005541603 kernel: Using GB pages for direct mapping
Dec  1 16:28:12 np0005541603 kernel: RAMDISK: [mem 0x2e95d000-0x334a6fff]
Dec  1 16:28:12 np0005541603 kernel: ACPI: Early table checksum verification disabled
Dec  1 16:28:12 np0005541603 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  1 16:28:12 np0005541603 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 16:28:12 np0005541603 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 16:28:12 np0005541603 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 16:28:12 np0005541603 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  1 16:28:12 np0005541603 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 16:28:12 np0005541603 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 16:28:12 np0005541603 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  1 16:28:12 np0005541603 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  1 16:28:12 np0005541603 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  1 16:28:12 np0005541603 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  1 16:28:12 np0005541603 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec  1 16:28:12 np0005541603 kernel: No NUMA configuration found
Dec  1 16:28:12 np0005541603 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  1 16:28:12 np0005541603 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec  1 16:28:12 np0005541603 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec  1 16:28:12 np0005541603 kernel: Zone ranges:
Dec  1 16:28:12 np0005541603 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  1 16:28:12 np0005541603 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  1 16:28:12 np0005541603 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  1 16:28:12 np0005541603 kernel:  Device   empty
Dec  1 16:28:12 np0005541603 kernel: Movable zone start for each node
Dec  1 16:28:12 np0005541603 kernel: Early memory node ranges
Dec  1 16:28:12 np0005541603 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  1 16:28:12 np0005541603 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  1 16:28:12 np0005541603 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  1 16:28:12 np0005541603 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  1 16:28:12 np0005541603 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  1 16:28:12 np0005541603 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  1 16:28:12 np0005541603 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  1 16:28:12 np0005541603 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  1 16:28:12 np0005541603 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  1 16:28:12 np0005541603 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  1 16:28:12 np0005541603 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  1 16:28:12 np0005541603 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  1 16:28:12 np0005541603 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  1 16:28:12 np0005541603 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  1 16:28:12 np0005541603 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  1 16:28:12 np0005541603 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  1 16:28:12 np0005541603 kernel: TSC deadline timer available
Dec  1 16:28:12 np0005541603 kernel: CPU topo: Max. logical packages:   8
Dec  1 16:28:12 np0005541603 kernel: CPU topo: Max. logical dies:       8
Dec  1 16:28:12 np0005541603 kernel: CPU topo: Max. dies per package:   1
Dec  1 16:28:12 np0005541603 kernel: CPU topo: Max. threads per core:   1
Dec  1 16:28:12 np0005541603 kernel: CPU topo: Num. cores per package:     1
Dec  1 16:28:12 np0005541603 kernel: CPU topo: Num. threads per package:   1
Dec  1 16:28:12 np0005541603 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  1 16:28:12 np0005541603 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  1 16:28:12 np0005541603 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  1 16:28:12 np0005541603 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  1 16:28:12 np0005541603 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  1 16:28:12 np0005541603 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  1 16:28:12 np0005541603 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  1 16:28:12 np0005541603 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  1 16:28:12 np0005541603 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  1 16:28:12 np0005541603 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  1 16:28:12 np0005541603 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  1 16:28:12 np0005541603 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  1 16:28:12 np0005541603 kernel: Booting paravirtualized kernel on KVM
Dec  1 16:28:12 np0005541603 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  1 16:28:12 np0005541603 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  1 16:28:12 np0005541603 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  1 16:28:12 np0005541603 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  1 16:28:12 np0005541603 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  1 16:28:12 np0005541603 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec  1 16:28:12 np0005541603 kernel: random: crng init done
Dec  1 16:28:12 np0005541603 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  1 16:28:12 np0005541603 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  1 16:28:12 np0005541603 kernel: Fallback order for Node 0: 0 
Dec  1 16:28:12 np0005541603 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  1 16:28:12 np0005541603 kernel: Policy zone: Normal
Dec  1 16:28:12 np0005541603 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  1 16:28:12 np0005541603 kernel: software IO TLB: area num 8.
Dec  1 16:28:12 np0005541603 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  1 16:28:12 np0005541603 kernel: ftrace: allocating 49335 entries in 193 pages
Dec  1 16:28:12 np0005541603 kernel: ftrace: allocated 193 pages with 3 groups
Dec  1 16:28:12 np0005541603 kernel: Dynamic Preempt: voluntary
Dec  1 16:28:12 np0005541603 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  1 16:28:12 np0005541603 kernel: rcu: 	RCU event tracing is enabled.
Dec  1 16:28:12 np0005541603 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  1 16:28:12 np0005541603 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  1 16:28:12 np0005541603 kernel: 	Rude variant of Tasks RCU enabled.
Dec  1 16:28:12 np0005541603 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  1 16:28:12 np0005541603 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  1 16:28:12 np0005541603 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  1 16:28:12 np0005541603 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  1 16:28:12 np0005541603 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  1 16:28:12 np0005541603 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  1 16:28:12 np0005541603 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  1 16:28:12 np0005541603 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  1 16:28:12 np0005541603 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec  1 16:28:12 np0005541603 kernel: Console: colour VGA+ 80x25
Dec  1 16:28:12 np0005541603 kernel: printk: console [ttyS0] enabled
Dec  1 16:28:12 np0005541603 kernel: ACPI: Core revision 20230331
Dec  1 16:28:12 np0005541603 kernel: APIC: Switch to symmetric I/O mode setup
Dec  1 16:28:12 np0005541603 kernel: x2apic enabled
Dec  1 16:28:12 np0005541603 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  1 16:28:12 np0005541603 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  1 16:28:12 np0005541603 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Dec  1 16:28:12 np0005541603 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  1 16:28:12 np0005541603 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  1 16:28:12 np0005541603 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  1 16:28:12 np0005541603 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  1 16:28:12 np0005541603 kernel: Spectre V2 : Mitigation: Retpolines
Dec  1 16:28:12 np0005541603 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  1 16:28:12 np0005541603 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  1 16:28:12 np0005541603 kernel: RETBleed: Mitigation: untrained return thunk
Dec  1 16:28:12 np0005541603 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  1 16:28:12 np0005541603 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  1 16:28:12 np0005541603 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  1 16:28:12 np0005541603 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  1 16:28:12 np0005541603 kernel: x86/bugs: return thunk changed
Dec  1 16:28:12 np0005541603 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  1 16:28:12 np0005541603 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  1 16:28:12 np0005541603 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  1 16:28:12 np0005541603 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  1 16:28:12 np0005541603 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  1 16:28:12 np0005541603 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  1 16:28:12 np0005541603 kernel: Freeing SMP alternatives memory: 40K
Dec  1 16:28:12 np0005541603 kernel: pid_max: default: 32768 minimum: 301
Dec  1 16:28:12 np0005541603 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  1 16:28:12 np0005541603 kernel: landlock: Up and running.
Dec  1 16:28:12 np0005541603 kernel: Yama: becoming mindful.
Dec  1 16:28:12 np0005541603 kernel: SELinux:  Initializing.
Dec  1 16:28:12 np0005541603 kernel: LSM support for eBPF active
Dec  1 16:28:12 np0005541603 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  1 16:28:12 np0005541603 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  1 16:28:12 np0005541603 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  1 16:28:12 np0005541603 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  1 16:28:12 np0005541603 kernel: ... version:                0
Dec  1 16:28:12 np0005541603 kernel: ... bit width:              48
Dec  1 16:28:12 np0005541603 kernel: ... generic registers:      6
Dec  1 16:28:12 np0005541603 kernel: ... value mask:             0000ffffffffffff
Dec  1 16:28:12 np0005541603 kernel: ... max period:             00007fffffffffff
Dec  1 16:28:12 np0005541603 kernel: ... fixed-purpose events:   0
Dec  1 16:28:12 np0005541603 kernel: ... event mask:             000000000000003f
Dec  1 16:28:12 np0005541603 kernel: signal: max sigframe size: 1776
Dec  1 16:28:12 np0005541603 kernel: rcu: Hierarchical SRCU implementation.
Dec  1 16:28:12 np0005541603 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  1 16:28:12 np0005541603 kernel: smp: Bringing up secondary CPUs ...
Dec  1 16:28:12 np0005541603 kernel: smpboot: x86: Booting SMP configuration:
Dec  1 16:28:12 np0005541603 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  1 16:28:12 np0005541603 kernel: smp: Brought up 1 node, 8 CPUs
Dec  1 16:28:12 np0005541603 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Dec  1 16:28:12 np0005541603 kernel: node 0 deferred pages initialised in 10ms
Dec  1 16:28:12 np0005541603 kernel: Memory: 7774696K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 607500K reserved, 0K cma-reserved)
Dec  1 16:28:12 np0005541603 kernel: devtmpfs: initialized
Dec  1 16:28:12 np0005541603 kernel: x86/mm: Memory block size: 128MB
Dec  1 16:28:12 np0005541603 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  1 16:28:12 np0005541603 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec  1 16:28:12 np0005541603 kernel: pinctrl core: initialized pinctrl subsystem
Dec  1 16:28:12 np0005541603 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  1 16:28:12 np0005541603 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  1 16:28:12 np0005541603 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  1 16:28:12 np0005541603 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  1 16:28:12 np0005541603 kernel: audit: initializing netlink subsys (disabled)
Dec  1 16:28:12 np0005541603 kernel: audit: type=2000 audit(1764624490.835:1): state=initialized audit_enabled=0 res=1
Dec  1 16:28:12 np0005541603 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  1 16:28:12 np0005541603 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  1 16:28:12 np0005541603 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  1 16:28:12 np0005541603 kernel: cpuidle: using governor menu
Dec  1 16:28:12 np0005541603 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  1 16:28:12 np0005541603 kernel: PCI: Using configuration type 1 for base access
Dec  1 16:28:12 np0005541603 kernel: PCI: Using configuration type 1 for extended access
Dec  1 16:28:12 np0005541603 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  1 16:28:12 np0005541603 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  1 16:28:12 np0005541603 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  1 16:28:12 np0005541603 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  1 16:28:12 np0005541603 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec  1 16:28:12 np0005541603 kernel: Demotion targets for Node 0: null
Dec  1 16:28:12 np0005541603 kernel: cryptd: max_cpu_qlen set to 1000
Dec  1 16:28:12 np0005541603 kernel: ACPI: Added _OSI(Module Device)
Dec  1 16:28:12 np0005541603 kernel: ACPI: Added _OSI(Processor Device)
Dec  1 16:28:12 np0005541603 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  1 16:28:12 np0005541603 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  1 16:28:12 np0005541603 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  1 16:28:12 np0005541603 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  1 16:28:12 np0005541603 kernel: ACPI: Interpreter enabled
Dec  1 16:28:12 np0005541603 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  1 16:28:12 np0005541603 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  1 16:28:12 np0005541603 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  1 16:28:12 np0005541603 kernel: PCI: Using E820 reservations for host bridge windows
Dec  1 16:28:12 np0005541603 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  1 16:28:12 np0005541603 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  1 16:28:12 np0005541603 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [3] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [4] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [5] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [6] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [7] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [8] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [9] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [10] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [11] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [12] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [13] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [14] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [15] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [16] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [17] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [18] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [19] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [20] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [21] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [22] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [23] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [24] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [25] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [26] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [27] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [28] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [29] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [30] registered
Dec  1 16:28:12 np0005541603 kernel: acpiphp: Slot [31] registered
Dec  1 16:28:12 np0005541603 kernel: PCI host bridge to bus 0000:00
Dec  1 16:28:12 np0005541603 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  1 16:28:12 np0005541603 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  1 16:28:12 np0005541603 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  1 16:28:12 np0005541603 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  1 16:28:12 np0005541603 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  1 16:28:12 np0005541603 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  1 16:28:12 np0005541603 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  1 16:28:12 np0005541603 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  1 16:28:12 np0005541603 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  1 16:28:12 np0005541603 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  1 16:28:12 np0005541603 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  1 16:28:12 np0005541603 kernel: iommu: Default domain type: Translated
Dec  1 16:28:12 np0005541603 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  1 16:28:12 np0005541603 kernel: SCSI subsystem initialized
Dec  1 16:28:12 np0005541603 kernel: ACPI: bus type USB registered
Dec  1 16:28:12 np0005541603 kernel: usbcore: registered new interface driver usbfs
Dec  1 16:28:12 np0005541603 kernel: usbcore: registered new interface driver hub
Dec  1 16:28:12 np0005541603 kernel: usbcore: registered new device driver usb
Dec  1 16:28:12 np0005541603 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  1 16:28:12 np0005541603 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  1 16:28:12 np0005541603 kernel: PTP clock support registered
Dec  1 16:28:12 np0005541603 kernel: EDAC MC: Ver: 3.0.0
Dec  1 16:28:12 np0005541603 kernel: NetLabel: Initializing
Dec  1 16:28:12 np0005541603 kernel: NetLabel:  domain hash size = 128
Dec  1 16:28:12 np0005541603 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  1 16:28:12 np0005541603 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  1 16:28:12 np0005541603 kernel: PCI: Using ACPI for IRQ routing
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  1 16:28:12 np0005541603 kernel: vgaarb: loaded
Dec  1 16:28:12 np0005541603 kernel: clocksource: Switched to clocksource kvm-clock
Dec  1 16:28:12 np0005541603 kernel: VFS: Disk quotas dquot_6.6.0
Dec  1 16:28:12 np0005541603 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  1 16:28:12 np0005541603 kernel: pnp: PnP ACPI init
Dec  1 16:28:12 np0005541603 kernel: pnp: PnP ACPI: found 5 devices
Dec  1 16:28:12 np0005541603 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  1 16:28:12 np0005541603 kernel: NET: Registered PF_INET protocol family
Dec  1 16:28:12 np0005541603 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  1 16:28:12 np0005541603 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  1 16:28:12 np0005541603 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  1 16:28:12 np0005541603 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  1 16:28:12 np0005541603 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  1 16:28:12 np0005541603 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  1 16:28:12 np0005541603 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  1 16:28:12 np0005541603 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  1 16:28:12 np0005541603 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  1 16:28:12 np0005541603 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  1 16:28:12 np0005541603 kernel: NET: Registered PF_XDP protocol family
Dec  1 16:28:12 np0005541603 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  1 16:28:12 np0005541603 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  1 16:28:12 np0005541603 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  1 16:28:12 np0005541603 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  1 16:28:12 np0005541603 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  1 16:28:12 np0005541603 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  1 16:28:12 np0005541603 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 74414 usecs
Dec  1 16:28:12 np0005541603 kernel: PCI: CLS 0 bytes, default 64
Dec  1 16:28:12 np0005541603 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  1 16:28:12 np0005541603 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  1 16:28:12 np0005541603 kernel: ACPI: bus type thunderbolt registered
Dec  1 16:28:12 np0005541603 kernel: Trying to unpack rootfs image as initramfs...
Dec  1 16:28:12 np0005541603 kernel: Initialise system trusted keyrings
Dec  1 16:28:12 np0005541603 kernel: Key type blacklist registered
Dec  1 16:28:12 np0005541603 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  1 16:28:12 np0005541603 kernel: zbud: loaded
Dec  1 16:28:12 np0005541603 kernel: integrity: Platform Keyring initialized
Dec  1 16:28:12 np0005541603 kernel: integrity: Machine keyring initialized
Dec  1 16:28:12 np0005541603 kernel: Freeing initrd memory: 77096K
Dec  1 16:28:12 np0005541603 kernel: NET: Registered PF_ALG protocol family
Dec  1 16:28:12 np0005541603 kernel: xor: automatically using best checksumming function   avx       
Dec  1 16:28:12 np0005541603 kernel: Key type asymmetric registered
Dec  1 16:28:12 np0005541603 kernel: Asymmetric key parser 'x509' registered
Dec  1 16:28:12 np0005541603 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  1 16:28:12 np0005541603 kernel: io scheduler mq-deadline registered
Dec  1 16:28:12 np0005541603 kernel: io scheduler kyber registered
Dec  1 16:28:12 np0005541603 kernel: io scheduler bfq registered
Dec  1 16:28:12 np0005541603 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  1 16:28:12 np0005541603 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  1 16:28:12 np0005541603 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  1 16:28:12 np0005541603 kernel: ACPI: button: Power Button [PWRF]
Dec  1 16:28:12 np0005541603 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  1 16:28:12 np0005541603 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  1 16:28:12 np0005541603 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  1 16:28:12 np0005541603 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  1 16:28:12 np0005541603 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  1 16:28:12 np0005541603 kernel: Non-volatile memory driver v1.3
Dec  1 16:28:12 np0005541603 kernel: rdac: device handler registered
Dec  1 16:28:12 np0005541603 kernel: hp_sw: device handler registered
Dec  1 16:28:12 np0005541603 kernel: emc: device handler registered
Dec  1 16:28:12 np0005541603 kernel: alua: device handler registered
Dec  1 16:28:12 np0005541603 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  1 16:28:12 np0005541603 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  1 16:28:12 np0005541603 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  1 16:28:12 np0005541603 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  1 16:28:12 np0005541603 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  1 16:28:12 np0005541603 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  1 16:28:12 np0005541603 kernel: usb usb1: Product: UHCI Host Controller
Dec  1 16:28:12 np0005541603 kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec  1 16:28:12 np0005541603 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  1 16:28:12 np0005541603 kernel: hub 1-0:1.0: USB hub found
Dec  1 16:28:12 np0005541603 kernel: hub 1-0:1.0: 2 ports detected
Dec  1 16:28:12 np0005541603 kernel: usbcore: registered new interface driver usbserial_generic
Dec  1 16:28:12 np0005541603 kernel: usbserial: USB Serial support registered for generic
Dec  1 16:28:12 np0005541603 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  1 16:28:12 np0005541603 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  1 16:28:12 np0005541603 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  1 16:28:12 np0005541603 kernel: mousedev: PS/2 mouse device common for all mice
Dec  1 16:28:12 np0005541603 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  1 16:28:12 np0005541603 kernel: rtc_cmos 00:04: registered as rtc0
Dec  1 16:28:12 np0005541603 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  1 16:28:12 np0005541603 kernel: rtc_cmos 00:04: setting system clock to 2025-12-01T21:28:11 UTC (1764624491)
Dec  1 16:28:12 np0005541603 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec  1 16:28:12 np0005541603 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  1 16:28:12 np0005541603 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  1 16:28:12 np0005541603 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  1 16:28:12 np0005541603 kernel: usbcore: registered new interface driver usbhid
Dec  1 16:28:12 np0005541603 kernel: usbhid: USB HID core driver
Dec  1 16:28:12 np0005541603 kernel: drop_monitor: Initializing network drop monitor service
Dec  1 16:28:12 np0005541603 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  1 16:28:12 np0005541603 kernel: Initializing XFRM netlink socket
Dec  1 16:28:12 np0005541603 kernel: NET: Registered PF_INET6 protocol family
Dec  1 16:28:12 np0005541603 kernel: Segment Routing with IPv6
Dec  1 16:28:12 np0005541603 kernel: NET: Registered PF_PACKET protocol family
Dec  1 16:28:12 np0005541603 kernel: mpls_gso: MPLS GSO support
Dec  1 16:28:12 np0005541603 kernel: IPI shorthand broadcast: enabled
Dec  1 16:28:12 np0005541603 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  1 16:28:12 np0005541603 kernel: AES CTR mode by8 optimization enabled
Dec  1 16:28:12 np0005541603 kernel: sched_clock: Marking stable (1184002749, 145169440)->(1413633869, -84461680)
Dec  1 16:28:12 np0005541603 kernel: registered taskstats version 1
Dec  1 16:28:12 np0005541603 kernel: Loading compiled-in X.509 certificates
Dec  1 16:28:12 np0005541603 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  1 16:28:12 np0005541603 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  1 16:28:12 np0005541603 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  1 16:28:12 np0005541603 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  1 16:28:12 np0005541603 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  1 16:28:12 np0005541603 kernel: Demotion targets for Node 0: null
Dec  1 16:28:12 np0005541603 kernel: page_owner is disabled
Dec  1 16:28:12 np0005541603 kernel: Key type .fscrypt registered
Dec  1 16:28:12 np0005541603 kernel: Key type fscrypt-provisioning registered
Dec  1 16:28:12 np0005541603 kernel: Key type big_key registered
Dec  1 16:28:12 np0005541603 kernel: Key type encrypted registered
Dec  1 16:28:12 np0005541603 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  1 16:28:12 np0005541603 kernel: Loading compiled-in module X.509 certificates
Dec  1 16:28:12 np0005541603 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  1 16:28:12 np0005541603 kernel: ima: Allocated hash algorithm: sha256
Dec  1 16:28:12 np0005541603 kernel: ima: No architecture policies found
Dec  1 16:28:12 np0005541603 kernel: evm: Initialising EVM extended attributes:
Dec  1 16:28:12 np0005541603 kernel: evm: security.selinux
Dec  1 16:28:12 np0005541603 kernel: evm: security.SMACK64 (disabled)
Dec  1 16:28:12 np0005541603 kernel: evm: security.SMACK64EXEC (disabled)
Dec  1 16:28:12 np0005541603 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  1 16:28:12 np0005541603 kernel: evm: security.SMACK64MMAP (disabled)
Dec  1 16:28:12 np0005541603 kernel: evm: security.apparmor (disabled)
Dec  1 16:28:12 np0005541603 kernel: evm: security.ima
Dec  1 16:28:12 np0005541603 kernel: evm: security.capability
Dec  1 16:28:12 np0005541603 kernel: evm: HMAC attrs: 0x1
Dec  1 16:28:12 np0005541603 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  1 16:28:12 np0005541603 kernel: Running certificate verification RSA selftest
Dec  1 16:28:12 np0005541603 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  1 16:28:12 np0005541603 kernel: Running certificate verification ECDSA selftest
Dec  1 16:28:12 np0005541603 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  1 16:28:12 np0005541603 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  1 16:28:12 np0005541603 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  1 16:28:12 np0005541603 kernel: usb 1-1: Manufacturer: QEMU
Dec  1 16:28:12 np0005541603 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  1 16:28:12 np0005541603 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  1 16:28:12 np0005541603 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  1 16:28:12 np0005541603 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  1 16:28:12 np0005541603 kernel: clk: Disabling unused clocks
Dec  1 16:28:12 np0005541603 kernel: Freeing unused decrypted memory: 2028K
Dec  1 16:28:12 np0005541603 kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec  1 16:28:12 np0005541603 kernel: Write protecting the kernel read-only data: 30720k
Dec  1 16:28:12 np0005541603 kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec  1 16:28:12 np0005541603 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  1 16:28:12 np0005541603 kernel: Run /init as init process
Dec  1 16:28:12 np0005541603 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  1 16:28:12 np0005541603 systemd: Detected virtualization kvm.
Dec  1 16:28:12 np0005541603 systemd: Detected architecture x86-64.
Dec  1 16:28:12 np0005541603 systemd: Running in initrd.
Dec  1 16:28:12 np0005541603 systemd: No hostname configured, using default hostname.
Dec  1 16:28:12 np0005541603 systemd: Hostname set to <localhost>.
Dec  1 16:28:12 np0005541603 systemd: Initializing machine ID from VM UUID.
Dec  1 16:28:12 np0005541603 systemd: Queued start job for default target Initrd Default Target.
Dec  1 16:28:12 np0005541603 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  1 16:28:12 np0005541603 systemd: Reached target Local Encrypted Volumes.
Dec  1 16:28:12 np0005541603 systemd: Reached target Initrd /usr File System.
Dec  1 16:28:12 np0005541603 systemd: Reached target Local File Systems.
Dec  1 16:28:12 np0005541603 systemd: Reached target Path Units.
Dec  1 16:28:12 np0005541603 systemd: Reached target Slice Units.
Dec  1 16:28:12 np0005541603 systemd: Reached target Swaps.
Dec  1 16:28:12 np0005541603 systemd: Reached target Timer Units.
Dec  1 16:28:12 np0005541603 systemd: Listening on D-Bus System Message Bus Socket.
Dec  1 16:28:12 np0005541603 systemd: Listening on Journal Socket (/dev/log).
Dec  1 16:28:12 np0005541603 systemd: Listening on Journal Socket.
Dec  1 16:28:12 np0005541603 systemd: Listening on udev Control Socket.
Dec  1 16:28:12 np0005541603 systemd: Listening on udev Kernel Socket.
Dec  1 16:28:12 np0005541603 systemd: Reached target Socket Units.
Dec  1 16:28:12 np0005541603 systemd: Starting Create List of Static Device Nodes...
Dec  1 16:28:12 np0005541603 systemd: Starting Journal Service...
Dec  1 16:28:12 np0005541603 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  1 16:28:12 np0005541603 systemd: Starting Apply Kernel Variables...
Dec  1 16:28:12 np0005541603 systemd: Starting Create System Users...
Dec  1 16:28:12 np0005541603 systemd: Starting Setup Virtual Console...
Dec  1 16:28:12 np0005541603 systemd: Finished Create List of Static Device Nodes.
Dec  1 16:28:12 np0005541603 systemd: Finished Apply Kernel Variables.
Dec  1 16:28:12 np0005541603 systemd: Finished Create System Users.
Dec  1 16:28:12 np0005541603 systemd-journald[303]: Journal started
Dec  1 16:28:12 np0005541603 systemd-journald[303]: Runtime Journal (/run/log/journal/76dcf733b3f84a5282fd91cdbadb534b) is 8.0M, max 153.6M, 145.6M free.
Dec  1 16:28:12 np0005541603 systemd-sysusers[307]: Creating group 'users' with GID 100.
Dec  1 16:28:12 np0005541603 systemd-sysusers[307]: Creating group 'dbus' with GID 81.
Dec  1 16:28:12 np0005541603 systemd-sysusers[307]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  1 16:28:12 np0005541603 systemd: Starting Create Static Device Nodes in /dev...
Dec  1 16:28:12 np0005541603 systemd: Started Journal Service.
Dec  1 16:28:12 np0005541603 systemd[1]: Starting Create Volatile Files and Directories...
Dec  1 16:28:12 np0005541603 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  1 16:28:12 np0005541603 systemd[1]: Finished Create Volatile Files and Directories.
Dec  1 16:28:12 np0005541603 systemd[1]: Finished Setup Virtual Console.
Dec  1 16:28:12 np0005541603 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  1 16:28:12 np0005541603 systemd[1]: Starting dracut cmdline hook...
Dec  1 16:28:12 np0005541603 dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Dec  1 16:28:12 np0005541603 dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  1 16:28:12 np0005541603 systemd[1]: Finished dracut cmdline hook.
Dec  1 16:28:12 np0005541603 systemd[1]: Starting dracut pre-udev hook...
Dec  1 16:28:12 np0005541603 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  1 16:28:12 np0005541603 kernel: device-mapper: uevent: version 1.0.3
Dec  1 16:28:12 np0005541603 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  1 16:28:12 np0005541603 kernel: RPC: Registered named UNIX socket transport module.
Dec  1 16:28:12 np0005541603 kernel: RPC: Registered udp transport module.
Dec  1 16:28:12 np0005541603 kernel: RPC: Registered tcp transport module.
Dec  1 16:28:12 np0005541603 kernel: RPC: Registered tcp-with-tls transport module.
Dec  1 16:28:12 np0005541603 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  1 16:28:13 np0005541603 rpc.statd[439]: Version 2.5.4 starting
Dec  1 16:28:13 np0005541603 rpc.statd[439]: Initializing NSM state
Dec  1 16:28:13 np0005541603 rpc.idmapd[444]: Setting log level to 0
Dec  1 16:28:13 np0005541603 systemd[1]: Finished dracut pre-udev hook.
Dec  1 16:28:13 np0005541603 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  1 16:28:13 np0005541603 systemd-udevd[457]: Using default interface naming scheme 'rhel-9.0'.
Dec  1 16:28:13 np0005541603 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  1 16:28:13 np0005541603 systemd[1]: Starting dracut pre-trigger hook...
Dec  1 16:28:13 np0005541603 systemd[1]: Finished dracut pre-trigger hook.
Dec  1 16:28:13 np0005541603 systemd[1]: Starting Coldplug All udev Devices...
Dec  1 16:28:13 np0005541603 systemd[1]: Created slice Slice /system/modprobe.
Dec  1 16:28:13 np0005541603 systemd[1]: Starting Load Kernel Module configfs...
Dec  1 16:28:13 np0005541603 systemd[1]: Finished Coldplug All udev Devices.
Dec  1 16:28:13 np0005541603 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  1 16:28:13 np0005541603 systemd[1]: Finished Load Kernel Module configfs.
Dec  1 16:28:13 np0005541603 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  1 16:28:13 np0005541603 systemd[1]: Reached target Network.
Dec  1 16:28:13 np0005541603 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  1 16:28:13 np0005541603 systemd[1]: Starting dracut initqueue hook...
Dec  1 16:28:13 np0005541603 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  1 16:28:13 np0005541603 systemd[1]: Mounting Kernel Configuration File System...
Dec  1 16:28:13 np0005541603 systemd[1]: Mounted Kernel Configuration File System.
Dec  1 16:28:13 np0005541603 systemd[1]: Reached target System Initialization.
Dec  1 16:28:13 np0005541603 systemd[1]: Reached target Basic System.
Dec  1 16:28:13 np0005541603 kernel: scsi host0: ata_piix
Dec  1 16:28:13 np0005541603 kernel: scsi host1: ata_piix
Dec  1 16:28:13 np0005541603 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  1 16:28:13 np0005541603 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  1 16:28:13 np0005541603 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec  1 16:28:13 np0005541603 kernel: vda: vda1
Dec  1 16:28:13 np0005541603 kernel: ata1: found unknown device (class 0)
Dec  1 16:28:13 np0005541603 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  1 16:28:13 np0005541603 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  1 16:28:13 np0005541603 systemd-udevd[471]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 16:28:13 np0005541603 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  1 16:28:13 np0005541603 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Dec  1 16:28:13 np0005541603 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  1 16:28:13 np0005541603 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  1 16:28:13 np0005541603 systemd[1]: Reached target Initrd Root Device.
Dec  1 16:28:13 np0005541603 systemd[1]: Finished dracut initqueue hook.
Dec  1 16:28:13 np0005541603 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  1 16:28:13 np0005541603 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  1 16:28:13 np0005541603 systemd[1]: Reached target Remote File Systems.
Dec  1 16:28:13 np0005541603 systemd[1]: Starting dracut pre-mount hook...
Dec  1 16:28:13 np0005541603 systemd[1]: Finished dracut pre-mount hook.
Dec  1 16:28:13 np0005541603 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Dec  1 16:28:13 np0005541603 systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Dec  1 16:28:13 np0005541603 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Dec  1 16:28:13 np0005541603 systemd[1]: Mounting /sysroot...
Dec  1 16:28:14 np0005541603 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  1 16:28:14 np0005541603 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Dec  1 16:28:14 np0005541603 kernel: XFS (vda1): Ending clean mount
Dec  1 16:28:14 np0005541603 systemd[1]: Mounted /sysroot.
Dec  1 16:28:14 np0005541603 systemd[1]: Reached target Initrd Root File System.
Dec  1 16:28:14 np0005541603 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  1 16:28:14 np0005541603 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  1 16:28:14 np0005541603 systemd[1]: Reached target Initrd File Systems.
Dec  1 16:28:14 np0005541603 systemd[1]: Reached target Initrd Default Target.
Dec  1 16:28:14 np0005541603 systemd[1]: Starting dracut mount hook...
Dec  1 16:28:14 np0005541603 systemd[1]: Finished dracut mount hook.
Dec  1 16:28:14 np0005541603 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  1 16:28:14 np0005541603 rpc.idmapd[444]: exiting on signal 15
Dec  1 16:28:14 np0005541603 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  1 16:28:14 np0005541603 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped target Network.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped target Timer Units.
Dec  1 16:28:14 np0005541603 systemd[1]: dbus.socket: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  1 16:28:14 np0005541603 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped target Initrd Default Target.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped target Basic System.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped target Initrd Root Device.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped target Initrd /usr File System.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped target Path Units.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped target Remote File Systems.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped target Slice Units.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped target Socket Units.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped target System Initialization.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped target Local File Systems.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped target Swaps.
Dec  1 16:28:14 np0005541603 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped dracut mount hook.
Dec  1 16:28:14 np0005541603 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped dracut pre-mount hook.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  1 16:28:14 np0005541603 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  1 16:28:14 np0005541603 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped dracut initqueue hook.
Dec  1 16:28:14 np0005541603 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped Apply Kernel Variables.
Dec  1 16:28:14 np0005541603 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  1 16:28:14 np0005541603 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped Coldplug All udev Devices.
Dec  1 16:28:14 np0005541603 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped dracut pre-trigger hook.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  1 16:28:14 np0005541603 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped Setup Virtual Console.
Dec  1 16:28:14 np0005541603 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  1 16:28:14 np0005541603 systemd[1]: systemd-udevd.service: Consumed 1.167s CPU time.
Dec  1 16:28:14 np0005541603 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Closed udev Control Socket.
Dec  1 16:28:14 np0005541603 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Closed udev Kernel Socket.
Dec  1 16:28:14 np0005541603 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped dracut pre-udev hook.
Dec  1 16:28:14 np0005541603 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped dracut cmdline hook.
Dec  1 16:28:14 np0005541603 systemd[1]: Starting Cleanup udev Database...
Dec  1 16:28:14 np0005541603 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  1 16:28:14 np0005541603 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  1 16:28:14 np0005541603 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Stopped Create System Users.
Dec  1 16:28:14 np0005541603 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  1 16:28:14 np0005541603 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  1 16:28:14 np0005541603 systemd[1]: Finished Cleanup udev Database.
Dec  1 16:28:14 np0005541603 systemd[1]: Reached target Switch Root.
Dec  1 16:28:14 np0005541603 systemd[1]: Starting Switch Root...
Dec  1 16:28:14 np0005541603 systemd[1]: Switching root.
Dec  1 16:28:14 np0005541603 systemd-journald[303]: Journal stopped
Dec  1 16:28:15 np0005541603 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec  1 16:28:15 np0005541603 kernel: audit: type=1404 audit(1764624494.873:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  1 16:28:15 np0005541603 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 16:28:15 np0005541603 kernel: SELinux:  policy capability open_perms=1
Dec  1 16:28:15 np0005541603 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 16:28:15 np0005541603 kernel: SELinux:  policy capability always_check_network=0
Dec  1 16:28:15 np0005541603 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 16:28:15 np0005541603 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 16:28:15 np0005541603 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 16:28:15 np0005541603 kernel: audit: type=1403 audit(1764624495.034:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  1 16:28:15 np0005541603 systemd: Successfully loaded SELinux policy in 167.403ms.
Dec  1 16:28:15 np0005541603 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.355ms.
Dec  1 16:28:15 np0005541603 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  1 16:28:15 np0005541603 systemd: Detected virtualization kvm.
Dec  1 16:28:15 np0005541603 systemd: Detected architecture x86-64.
Dec  1 16:28:15 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 16:28:15 np0005541603 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  1 16:28:15 np0005541603 systemd: Stopped Switch Root.
Dec  1 16:28:15 np0005541603 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  1 16:28:15 np0005541603 systemd: Created slice Slice /system/getty.
Dec  1 16:28:15 np0005541603 systemd: Created slice Slice /system/serial-getty.
Dec  1 16:28:15 np0005541603 systemd: Created slice Slice /system/sshd-keygen.
Dec  1 16:28:15 np0005541603 systemd: Created slice User and Session Slice.
Dec  1 16:28:15 np0005541603 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  1 16:28:15 np0005541603 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  1 16:28:15 np0005541603 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  1 16:28:15 np0005541603 systemd: Reached target Local Encrypted Volumes.
Dec  1 16:28:15 np0005541603 systemd: Stopped target Switch Root.
Dec  1 16:28:15 np0005541603 systemd: Stopped target Initrd File Systems.
Dec  1 16:28:15 np0005541603 systemd: Stopped target Initrd Root File System.
Dec  1 16:28:15 np0005541603 systemd: Reached target Local Integrity Protected Volumes.
Dec  1 16:28:15 np0005541603 systemd: Reached target Path Units.
Dec  1 16:28:15 np0005541603 systemd: Reached target rpc_pipefs.target.
Dec  1 16:28:15 np0005541603 systemd: Reached target Slice Units.
Dec  1 16:28:15 np0005541603 systemd: Reached target Swaps.
Dec  1 16:28:15 np0005541603 systemd: Reached target Local Verity Protected Volumes.
Dec  1 16:28:15 np0005541603 systemd: Listening on RPCbind Server Activation Socket.
Dec  1 16:28:15 np0005541603 systemd: Reached target RPC Port Mapper.
Dec  1 16:28:15 np0005541603 systemd: Listening on Process Core Dump Socket.
Dec  1 16:28:15 np0005541603 systemd: Listening on initctl Compatibility Named Pipe.
Dec  1 16:28:15 np0005541603 systemd: Listening on udev Control Socket.
Dec  1 16:28:15 np0005541603 systemd: Listening on udev Kernel Socket.
Dec  1 16:28:15 np0005541603 systemd: Mounting Huge Pages File System...
Dec  1 16:28:15 np0005541603 systemd: Mounting POSIX Message Queue File System...
Dec  1 16:28:15 np0005541603 systemd: Mounting Kernel Debug File System...
Dec  1 16:28:15 np0005541603 systemd: Mounting Kernel Trace File System...
Dec  1 16:28:15 np0005541603 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  1 16:28:15 np0005541603 systemd: Starting Create List of Static Device Nodes...
Dec  1 16:28:15 np0005541603 systemd: Starting Load Kernel Module configfs...
Dec  1 16:28:15 np0005541603 systemd: Starting Load Kernel Module drm...
Dec  1 16:28:15 np0005541603 systemd: Starting Load Kernel Module efi_pstore...
Dec  1 16:28:15 np0005541603 systemd: Starting Load Kernel Module fuse...
Dec  1 16:28:15 np0005541603 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  1 16:28:15 np0005541603 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  1 16:28:15 np0005541603 systemd: Stopped File System Check on Root Device.
Dec  1 16:28:15 np0005541603 systemd: Stopped Journal Service.
Dec  1 16:28:15 np0005541603 systemd: Starting Journal Service...
Dec  1 16:28:15 np0005541603 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  1 16:28:15 np0005541603 systemd: Starting Generate network units from Kernel command line...
Dec  1 16:28:15 np0005541603 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  1 16:28:15 np0005541603 systemd: Starting Remount Root and Kernel File Systems...
Dec  1 16:28:15 np0005541603 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  1 16:28:15 np0005541603 systemd: Starting Apply Kernel Variables...
Dec  1 16:28:15 np0005541603 systemd: Starting Coldplug All udev Devices...
Dec  1 16:28:15 np0005541603 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  1 16:28:15 np0005541603 systemd: Mounted Huge Pages File System.
Dec  1 16:28:15 np0005541603 kernel: fuse: init (API version 7.37)
Dec  1 16:28:15 np0005541603 systemd: Mounted POSIX Message Queue File System.
Dec  1 16:28:15 np0005541603 systemd: Mounted Kernel Debug File System.
Dec  1 16:28:15 np0005541603 systemd: Mounted Kernel Trace File System.
Dec  1 16:28:15 np0005541603 systemd: Finished Create List of Static Device Nodes.
Dec  1 16:28:15 np0005541603 systemd: modprobe@configfs.service: Deactivated successfully.
Dec  1 16:28:15 np0005541603 systemd: Finished Load Kernel Module configfs.
Dec  1 16:28:15 np0005541603 systemd-journald[678]: Journal started
Dec  1 16:28:15 np0005541603 systemd-journald[678]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Dec  1 16:28:15 np0005541603 systemd[1]: Queued start job for default target Multi-User System.
Dec  1 16:28:15 np0005541603 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  1 16:28:15 np0005541603 systemd: Started Journal Service.
Dec  1 16:28:15 np0005541603 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec  1 16:28:15 np0005541603 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec  1 16:28:15 np0005541603 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  1 16:28:15 np0005541603 systemd[1]: Finished Load Kernel Module fuse.
Dec  1 16:28:15 np0005541603 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  1 16:28:15 np0005541603 systemd[1]: Finished Generate network units from Kernel command line.
Dec  1 16:28:15 np0005541603 kernel: ACPI: bus type drm_connector registered
Dec  1 16:28:15 np0005541603 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec  1 16:28:15 np0005541603 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  1 16:28:15 np0005541603 systemd[1]: Finished Load Kernel Module drm.
Dec  1 16:28:15 np0005541603 systemd[1]: Finished Apply Kernel Variables.
Dec  1 16:28:15 np0005541603 systemd[1]: Mounting FUSE Control File System...
Dec  1 16:28:15 np0005541603 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  1 16:28:15 np0005541603 systemd[1]: Starting Rebuild Hardware Database...
Dec  1 16:28:15 np0005541603 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  1 16:28:15 np0005541603 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  1 16:28:15 np0005541603 systemd[1]: Starting Load/Save OS Random Seed...
Dec  1 16:28:15 np0005541603 systemd[1]: Starting Create System Users...
Dec  1 16:28:15 np0005541603 systemd[1]: Mounted FUSE Control File System.
Dec  1 16:28:15 np0005541603 systemd-journald[678]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Dec  1 16:28:15 np0005541603 systemd-journald[678]: Received client request to flush runtime journal.
Dec  1 16:28:15 np0005541603 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  1 16:28:15 np0005541603 systemd[1]: Finished Load/Save OS Random Seed.
Dec  1 16:28:15 np0005541603 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  1 16:28:15 np0005541603 systemd[1]: Finished Create System Users.
Dec  1 16:28:15 np0005541603 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  1 16:28:15 np0005541603 systemd[1]: Finished Coldplug All udev Devices.
Dec  1 16:28:15 np0005541603 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  1 16:28:15 np0005541603 systemd[1]: Reached target Preparation for Local File Systems.
Dec  1 16:28:15 np0005541603 systemd[1]: Reached target Local File Systems.
Dec  1 16:28:15 np0005541603 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  1 16:28:15 np0005541603 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  1 16:28:15 np0005541603 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  1 16:28:15 np0005541603 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  1 16:28:15 np0005541603 systemd[1]: Starting Automatic Boot Loader Update...
Dec  1 16:28:15 np0005541603 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  1 16:28:15 np0005541603 systemd[1]: Starting Create Volatile Files and Directories...
Dec  1 16:28:15 np0005541603 bootctl[698]: Couldn't find EFI system partition, skipping.
Dec  1 16:28:15 np0005541603 systemd[1]: Finished Automatic Boot Loader Update.
Dec  1 16:28:15 np0005541603 systemd[1]: Finished Create Volatile Files and Directories.
Dec  1 16:28:16 np0005541603 systemd[1]: Starting Security Auditing Service...
Dec  1 16:28:16 np0005541603 systemd[1]: Starting RPC Bind...
Dec  1 16:28:16 np0005541603 systemd[1]: Starting Rebuild Journal Catalog...
Dec  1 16:28:16 np0005541603 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  1 16:28:16 np0005541603 auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  1 16:28:16 np0005541603 auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  1 16:28:16 np0005541603 systemd[1]: Finished Rebuild Journal Catalog.
Dec  1 16:28:16 np0005541603 systemd[1]: Started RPC Bind.
Dec  1 16:28:16 np0005541603 augenrules[709]: /sbin/augenrules: No change
Dec  1 16:28:16 np0005541603 augenrules[724]: No rules
Dec  1 16:28:16 np0005541603 augenrules[724]: enabled 1
Dec  1 16:28:16 np0005541603 augenrules[724]: failure 1
Dec  1 16:28:16 np0005541603 augenrules[724]: pid 704
Dec  1 16:28:16 np0005541603 augenrules[724]: rate_limit 0
Dec  1 16:28:16 np0005541603 augenrules[724]: backlog_limit 8192
Dec  1 16:28:16 np0005541603 augenrules[724]: lost 0
Dec  1 16:28:16 np0005541603 augenrules[724]: backlog 3
Dec  1 16:28:16 np0005541603 augenrules[724]: backlog_wait_time 60000
Dec  1 16:28:16 np0005541603 augenrules[724]: backlog_wait_time_actual 0
Dec  1 16:28:16 np0005541603 augenrules[724]: enabled 1
Dec  1 16:28:16 np0005541603 augenrules[724]: failure 1
Dec  1 16:28:16 np0005541603 augenrules[724]: pid 704
Dec  1 16:28:16 np0005541603 augenrules[724]: rate_limit 0
Dec  1 16:28:16 np0005541603 augenrules[724]: backlog_limit 8192
Dec  1 16:28:16 np0005541603 augenrules[724]: lost 0
Dec  1 16:28:16 np0005541603 augenrules[724]: backlog 2
Dec  1 16:28:16 np0005541603 augenrules[724]: backlog_wait_time 60000
Dec  1 16:28:16 np0005541603 augenrules[724]: backlog_wait_time_actual 0
Dec  1 16:28:16 np0005541603 augenrules[724]: enabled 1
Dec  1 16:28:16 np0005541603 augenrules[724]: failure 1
Dec  1 16:28:16 np0005541603 augenrules[724]: pid 704
Dec  1 16:28:16 np0005541603 augenrules[724]: rate_limit 0
Dec  1 16:28:16 np0005541603 augenrules[724]: backlog_limit 8192
Dec  1 16:28:16 np0005541603 augenrules[724]: lost 0
Dec  1 16:28:16 np0005541603 augenrules[724]: backlog 3
Dec  1 16:28:16 np0005541603 augenrules[724]: backlog_wait_time 60000
Dec  1 16:28:16 np0005541603 augenrules[724]: backlog_wait_time_actual 0
Dec  1 16:28:16 np0005541603 systemd[1]: Started Security Auditing Service.
Dec  1 16:28:16 np0005541603 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  1 16:28:16 np0005541603 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  1 16:28:16 np0005541603 systemd[1]: Finished Rebuild Hardware Database.
Dec  1 16:28:16 np0005541603 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  1 16:28:16 np0005541603 systemd[1]: Starting Update is Completed...
Dec  1 16:28:16 np0005541603 systemd[1]: Finished Update is Completed.
Dec  1 16:28:16 np0005541603 systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Dec  1 16:28:16 np0005541603 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  1 16:28:16 np0005541603 systemd[1]: Reached target System Initialization.
Dec  1 16:28:16 np0005541603 systemd[1]: Started dnf makecache --timer.
Dec  1 16:28:16 np0005541603 systemd[1]: Started Daily rotation of log files.
Dec  1 16:28:16 np0005541603 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  1 16:28:16 np0005541603 systemd[1]: Reached target Timer Units.
Dec  1 16:28:16 np0005541603 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  1 16:28:16 np0005541603 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  1 16:28:16 np0005541603 systemd[1]: Reached target Socket Units.
Dec  1 16:28:16 np0005541603 systemd[1]: Starting D-Bus System Message Bus...
Dec  1 16:28:16 np0005541603 systemd-udevd[739]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 16:28:16 np0005541603 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  1 16:28:16 np0005541603 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  1 16:28:16 np0005541603 systemd[1]: Starting Load Kernel Module configfs...
Dec  1 16:28:16 np0005541603 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  1 16:28:16 np0005541603 systemd[1]: Finished Load Kernel Module configfs.
Dec  1 16:28:16 np0005541603 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  1 16:28:16 np0005541603 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  1 16:28:16 np0005541603 systemd[1]: Started D-Bus System Message Bus.
Dec  1 16:28:16 np0005541603 systemd[1]: Reached target Basic System.
Dec  1 16:28:16 np0005541603 dbus-broker-lau[770]: Ready
Dec  1 16:28:16 np0005541603 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  1 16:28:16 np0005541603 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  1 16:28:16 np0005541603 systemd[1]: Starting NTP client/server...
Dec  1 16:28:16 np0005541603 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  1 16:28:16 np0005541603 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  1 16:28:16 np0005541603 systemd[1]: Starting IPv4 firewall with iptables...
Dec  1 16:28:16 np0005541603 systemd[1]: Started irqbalance daemon.
Dec  1 16:28:16 np0005541603 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  1 16:28:16 np0005541603 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 16:28:16 np0005541603 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 16:28:16 np0005541603 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 16:28:16 np0005541603 systemd[1]: Reached target sshd-keygen.target.
Dec  1 16:28:16 np0005541603 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  1 16:28:16 np0005541603 systemd[1]: Reached target User and Group Name Lookups.
Dec  1 16:28:16 np0005541603 chronyd[795]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  1 16:28:16 np0005541603 chronyd[795]: Loaded 0 symmetric keys
Dec  1 16:28:16 np0005541603 chronyd[795]: Using right/UTC timezone to obtain leap second data
Dec  1 16:28:16 np0005541603 chronyd[795]: Loaded seccomp filter (level 2)
Dec  1 16:28:16 np0005541603 systemd[1]: Starting User Login Management...
Dec  1 16:28:16 np0005541603 systemd[1]: Started NTP client/server.
Dec  1 16:28:16 np0005541603 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  1 16:28:16 np0005541603 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  1 16:28:16 np0005541603 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  1 16:28:16 np0005541603 systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  1 16:28:16 np0005541603 systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  1 16:28:16 np0005541603 systemd-logind[788]: New seat seat0.
Dec  1 16:28:16 np0005541603 systemd[1]: Started User Login Management.
Dec  1 16:28:16 np0005541603 kernel: kvm_amd: TSC scaling supported
Dec  1 16:28:16 np0005541603 kernel: kvm_amd: Nested Virtualization enabled
Dec  1 16:28:16 np0005541603 kernel: kvm_amd: Nested Paging enabled
Dec  1 16:28:16 np0005541603 kernel: kvm_amd: LBR virtualization supported
Dec  1 16:28:16 np0005541603 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  1 16:28:16 np0005541603 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  1 16:28:16 np0005541603 kernel: Console: switching to colour dummy device 80x25
Dec  1 16:28:16 np0005541603 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  1 16:28:16 np0005541603 kernel: [drm] features: -context_init
Dec  1 16:28:16 np0005541603 kernel: [drm] number of scanouts: 1
Dec  1 16:28:16 np0005541603 kernel: [drm] number of cap sets: 0
Dec  1 16:28:16 np0005541603 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  1 16:28:16 np0005541603 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  1 16:28:16 np0005541603 kernel: Console: switching to colour frame buffer device 128x48
Dec  1 16:28:16 np0005541603 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  1 16:28:16 np0005541603 iptables.init[781]: iptables: Applying firewall rules: [  OK  ]
Dec  1 16:28:16 np0005541603 systemd[1]: Finished IPv4 firewall with iptables.
Dec  1 16:28:17 np0005541603 cloud-init[842]: Cloud-init v. 24.4-7.el9 running 'init-local' at Mon, 01 Dec 2025 21:28:17 +0000. Up 6.71 seconds.
Dec  1 16:28:17 np0005541603 systemd[1]: run-cloud\x2dinit-tmp-tmpv2moawnc.mount: Deactivated successfully.
Dec  1 16:28:17 np0005541603 systemd[1]: Starting Hostname Service...
Dec  1 16:28:17 np0005541603 systemd[1]: Started Hostname Service.
Dec  1 16:28:17 np0005541603 systemd-hostnamed[856]: Hostname set to <np0005541603.novalocal> (static)
Dec  1 16:28:17 np0005541603 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  1 16:28:17 np0005541603 systemd[1]: Reached target Preparation for Network.
Dec  1 16:28:17 np0005541603 systemd[1]: Starting Network Manager...
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7299] NetworkManager (version 1.54.1-1.el9) is starting... (boot:7a82d3c7-3900-45d2-a5fc-f942d952501d)
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7307] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7430] manager[0x5645fa762080]: monitoring kernel firmware directory '/lib/firmware'.
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7488] hostname: hostname: using hostnamed
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7489] hostname: static hostname changed from (none) to "np0005541603.novalocal"
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7498] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7631] manager[0x5645fa762080]: rfkill: Wi-Fi hardware radio set enabled
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7631] manager[0x5645fa762080]: rfkill: WWAN hardware radio set enabled
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7675] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7675] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7676] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7676] manager: Networking is enabled by state file
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7678] settings: Loaded settings plugin: keyfile (internal)
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7691] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7713] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7725] dhcp: init: Using DHCP client 'internal'
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7727] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  1 16:28:17 np0005541603 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7742] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7781] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7788] device (lo): Activation: starting connection 'lo' (6817b782-5092-4502-b86b-5365c44c46c2)
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7795] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7798] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7828] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7831] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7833] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7835] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7837] device (eth0): carrier: link connected
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7839] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7845] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7864] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7874] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7876] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7881] manager: NetworkManager state is now CONNECTING
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7885] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7899] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7905] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 16:28:17 np0005541603 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7964] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.7978] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  1 16:28:17 np0005541603 systemd[1]: Started Network Manager.
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.8019] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 16:28:17 np0005541603 systemd[1]: Reached target Network.
Dec  1 16:28:17 np0005541603 systemd[1]: Starting Network Manager Wait Online...
Dec  1 16:28:17 np0005541603 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  1 16:28:17 np0005541603 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.8363] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.8368] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.8380] device (lo): Activation: successful, device activated.
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.8396] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.8398] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.8405] manager: NetworkManager state is now CONNECTED_SITE
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.8459] device (eth0): Activation: successful, device activated.
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.8468] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  1 16:28:17 np0005541603 NetworkManager[860]: <info>  [1764624497.8474] manager: startup complete
Dec  1 16:28:17 np0005541603 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  1 16:28:17 np0005541603 systemd[1]: Finished Network Manager Wait Online.
Dec  1 16:28:17 np0005541603 systemd[1]: Starting Cloud-init: Network Stage...
Dec  1 16:28:17 np0005541603 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  1 16:28:17 np0005541603 systemd[1]: Reached target NFS client services.
Dec  1 16:28:17 np0005541603 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  1 16:28:17 np0005541603 systemd[1]: Reached target Remote File Systems.
Dec  1 16:28:17 np0005541603 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  1 16:28:18 np0005541603 cloud-init[923]: Cloud-init v. 24.4-7.el9 running 'init' at Mon, 01 Dec 2025 21:28:18 +0000. Up 7.83 seconds.
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: |  eth0  | True |         38.102.83.74         | 255.255.255.0 | global | fa:16:3e:3f:eb:ec |
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:fe3f:ebec/64 |       .       |  link  | fa:16:3e:3f:eb:ec |
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec  1 16:28:18 np0005541603 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  1 16:28:22 np0005541603 chronyd[795]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Dec  1 16:28:22 np0005541603 chronyd[795]: System clock TAI offset set to 37 seconds
Dec  1 16:28:27 np0005541603 irqbalance[782]: Cannot change IRQ 25 affinity: Operation not permitted
Dec  1 16:28:27 np0005541603 irqbalance[782]: IRQ 25 affinity is now unmanaged
Dec  1 16:28:27 np0005541603 irqbalance[782]: Cannot change IRQ 31 affinity: Operation not permitted
Dec  1 16:28:27 np0005541603 irqbalance[782]: IRQ 31 affinity is now unmanaged
Dec  1 16:28:27 np0005541603 irqbalance[782]: Cannot change IRQ 28 affinity: Operation not permitted
Dec  1 16:28:27 np0005541603 irqbalance[782]: IRQ 28 affinity is now unmanaged
Dec  1 16:28:27 np0005541603 irqbalance[782]: Cannot change IRQ 32 affinity: Operation not permitted
Dec  1 16:28:27 np0005541603 irqbalance[782]: IRQ 32 affinity is now unmanaged
Dec  1 16:28:27 np0005541603 irqbalance[782]: Cannot change IRQ 30 affinity: Operation not permitted
Dec  1 16:28:27 np0005541603 irqbalance[782]: IRQ 30 affinity is now unmanaged
Dec  1 16:28:27 np0005541603 irqbalance[782]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  1 16:28:27 np0005541603 irqbalance[782]: IRQ 29 affinity is now unmanaged
Dec  1 16:28:28 np0005541603 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 16:28:28 np0005541603 cloud-init[923]: Generating public/private rsa key pair.
Dec  1 16:28:28 np0005541603 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  1 16:28:28 np0005541603 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  1 16:28:28 np0005541603 cloud-init[923]: The key fingerprint is:
Dec  1 16:28:28 np0005541603 cloud-init[923]: SHA256:qcixZVCnU3HBiob3+VEktHl5Tk6V5zf1E7aEjk7XTUE root@np0005541603.novalocal
Dec  1 16:28:28 np0005541603 cloud-init[923]: The key's randomart image is:
Dec  1 16:28:28 np0005541603 cloud-init[923]: +---[RSA 3072]----+
Dec  1 16:28:28 np0005541603 cloud-init[923]: |      . +++.  oE+|
Dec  1 16:28:28 np0005541603 cloud-init[923]: |     . + .oo.o *+|
Dec  1 16:28:28 np0005541603 cloud-init[923]: |    ..o. .oo= BoB|
Dec  1 16:28:28 np0005541603 cloud-init[923]: |    ..+... +.O ==|
Dec  1 16:28:28 np0005541603 cloud-init[923]: |    .oo.S.o.. o +|
Dec  1 16:28:28 np0005541603 cloud-init[923]: |   . * .o ..     |
Dec  1 16:28:28 np0005541603 cloud-init[923]: |    + .  . .     |
Dec  1 16:28:28 np0005541603 cloud-init[923]: |          .      |
Dec  1 16:28:28 np0005541603 cloud-init[923]: |                 |
Dec  1 16:28:28 np0005541603 cloud-init[923]: +----[SHA256]-----+
Dec  1 16:28:28 np0005541603 cloud-init[923]: Generating public/private ecdsa key pair.
Dec  1 16:28:28 np0005541603 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  1 16:28:28 np0005541603 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  1 16:28:28 np0005541603 cloud-init[923]: The key fingerprint is:
Dec  1 16:28:28 np0005541603 cloud-init[923]: SHA256:K+Z/qvk1c1QhVyK5yPIQKFeDWBqYhjDM0jD87qlAKMs root@np0005541603.novalocal
Dec  1 16:28:28 np0005541603 cloud-init[923]: The key's randomart image is:
Dec  1 16:28:28 np0005541603 cloud-init[923]: +---[ECDSA 256]---+
Dec  1 16:28:28 np0005541603 cloud-init[923]: |X+ o.o.+o   o.+..|
Dec  1 16:28:28 np0005541603 cloud-init[923]: |o*= oo+ ..  .+ o |
Dec  1 16:28:28 np0005541603 cloud-init[923]: |... .o   o . ..  |
Dec  1 16:28:28 np0005541603 cloud-init[923]: |.  .    o o ..   |
Dec  1 16:28:28 np0005541603 cloud-init[923]: |o..     S+  .    |
Dec  1 16:28:28 np0005541603 cloud-init[923]: |+. .     ...     |
Dec  1 16:28:28 np0005541603 cloud-init[923]: |oE. . o . + .    |
Dec  1 16:28:28 np0005541603 cloud-init[923]: |.  o o o ..+     |
Dec  1 16:28:28 np0005541603 cloud-init[923]: | ..   +++o       |
Dec  1 16:28:28 np0005541603 cloud-init[923]: +----[SHA256]-----+
Dec  1 16:28:28 np0005541603 cloud-init[923]: Generating public/private ed25519 key pair.
Dec  1 16:28:28 np0005541603 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  1 16:28:28 np0005541603 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  1 16:28:28 np0005541603 cloud-init[923]: The key fingerprint is:
Dec  1 16:28:28 np0005541603 cloud-init[923]: SHA256:HoQh7o21YmFGphasmIipY45D9mUB4k+bzIIZ4COyXgg root@np0005541603.novalocal
Dec  1 16:28:28 np0005541603 cloud-init[923]: The key's randomart image is:
Dec  1 16:28:28 np0005541603 cloud-init[923]: +--[ED25519 256]--+
Dec  1 16:28:28 np0005541603 cloud-init[923]: | .. + .          |
Dec  1 16:28:28 np0005541603 cloud-init[923]: |...B . o         |
Dec  1 16:28:28 np0005541603 cloud-init[923]: |B++ * o .        |
Dec  1 16:28:28 np0005541603 cloud-init[923]: |E= = * o         |
Dec  1 16:28:28 np0005541603 cloud-init[923]: |*+B B + S        |
Dec  1 16:28:28 np0005541603 cloud-init[923]: |** O + . .       |
Dec  1 16:28:28 np0005541603 cloud-init[923]: |B.+ o   .        |
Dec  1 16:28:28 np0005541603 cloud-init[923]: |oo .             |
Dec  1 16:28:28 np0005541603 cloud-init[923]: | .               |
Dec  1 16:28:28 np0005541603 cloud-init[923]: +----[SHA256]-----+
Dec  1 16:28:28 np0005541603 sm-notify[1007]: Version 2.5.4 starting
Dec  1 16:28:28 np0005541603 systemd[1]: Finished Cloud-init: Network Stage.
Dec  1 16:28:28 np0005541603 systemd[1]: Reached target Cloud-config availability.
Dec  1 16:28:28 np0005541603 systemd[1]: Reached target Network is Online.
Dec  1 16:28:28 np0005541603 systemd[1]: Starting Cloud-init: Config Stage...
Dec  1 16:28:28 np0005541603 systemd[1]: Starting Crash recovery kernel arming...
Dec  1 16:28:28 np0005541603 systemd[1]: Starting Notify NFS peers of a restart...
Dec  1 16:28:28 np0005541603 systemd[1]: Starting System Logging Service...
Dec  1 16:28:28 np0005541603 systemd[1]: Starting OpenSSH server daemon...
Dec  1 16:28:28 np0005541603 systemd[1]: Starting Permit User Sessions...
Dec  1 16:28:28 np0005541603 systemd[1]: Started Notify NFS peers of a restart.
Dec  1 16:28:28 np0005541603 systemd[1]: Finished Permit User Sessions.
Dec  1 16:28:28 np0005541603 systemd[1]: Started Command Scheduler.
Dec  1 16:28:28 np0005541603 systemd[1]: Started Getty on tty1.
Dec  1 16:28:28 np0005541603 systemd[1]: Started Serial Getty on ttyS0.
Dec  1 16:28:28 np0005541603 systemd[1]: Reached target Login Prompts.
Dec  1 16:28:28 np0005541603 systemd[1]: Started OpenSSH server daemon.
Dec  1 16:28:28 np0005541603 rsyslogd[1008]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1008" x-info="https://www.rsyslog.com"] start
Dec  1 16:28:28 np0005541603 rsyslogd[1008]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  1 16:28:28 np0005541603 systemd[1]: Started System Logging Service.
Dec  1 16:28:28 np0005541603 systemd[1]: Reached target Multi-User System.
Dec  1 16:28:28 np0005541603 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  1 16:28:28 np0005541603 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  1 16:28:28 np0005541603 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  1 16:28:28 np0005541603 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 16:28:28 np0005541603 kdumpctl[1017]: kdump: No kdump initial ramdisk found.
Dec  1 16:28:28 np0005541603 kdumpctl[1017]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec  1 16:28:28 np0005541603 cloud-init[1135]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Mon, 01 Dec 2025 21:28:28 +0000. Up 18.28 seconds.
Dec  1 16:28:28 np0005541603 systemd[1]: Finished Cloud-init: Config Stage.
Dec  1 16:28:28 np0005541603 systemd[1]: Starting Cloud-init: Final Stage...
Dec  1 16:28:28 np0005541603 dracut[1269]: dracut-057-102.git20250818.el9
Dec  1 16:28:29 np0005541603 cloud-init[1287]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Mon, 01 Dec 2025 21:28:29 +0000. Up 18.68 seconds.
Dec  1 16:28:29 np0005541603 cloud-init[1299]: #############################################################
Dec  1 16:28:29 np0005541603 dracut[1271]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec  1 16:28:29 np0005541603 cloud-init[1302]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  1 16:28:29 np0005541603 cloud-init[1309]: 256 SHA256:K+Z/qvk1c1QhVyK5yPIQKFeDWBqYhjDM0jD87qlAKMs root@np0005541603.novalocal (ECDSA)
Dec  1 16:28:29 np0005541603 cloud-init[1314]: 256 SHA256:HoQh7o21YmFGphasmIipY45D9mUB4k+bzIIZ4COyXgg root@np0005541603.novalocal (ED25519)
Dec  1 16:28:29 np0005541603 cloud-init[1319]: 3072 SHA256:qcixZVCnU3HBiob3+VEktHl5Tk6V5zf1E7aEjk7XTUE root@np0005541603.novalocal (RSA)
Dec  1 16:28:29 np0005541603 cloud-init[1324]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  1 16:28:29 np0005541603 cloud-init[1325]: #############################################################
Dec  1 16:28:29 np0005541603 cloud-init[1287]: Cloud-init v. 24.4-7.el9 finished at Mon, 01 Dec 2025 21:28:29 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 18.89 seconds
Dec  1 16:28:29 np0005541603 systemd[1]: Finished Cloud-init: Final Stage.
Dec  1 16:28:29 np0005541603 systemd[1]: Reached target Cloud-init target.
Dec  1 16:28:29 np0005541603 dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  1 16:28:29 np0005541603 dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  1 16:28:29 np0005541603 dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  1 16:28:29 np0005541603 dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  1 16:28:29 np0005541603 dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  1 16:28:29 np0005541603 dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  1 16:28:29 np0005541603 dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  1 16:28:29 np0005541603 dracut[1271]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  1 16:28:29 np0005541603 dracut[1271]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  1 16:28:29 np0005541603 dracut[1271]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: memstrack is not available
Dec  1 16:28:30 np0005541603 dracut[1271]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  1 16:28:30 np0005541603 dracut[1271]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  1 16:28:31 np0005541603 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  1 16:28:31 np0005541603 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  1 16:28:31 np0005541603 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  1 16:28:31 np0005541603 dracut[1271]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  1 16:28:31 np0005541603 dracut[1271]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  1 16:28:31 np0005541603 dracut[1271]: memstrack is not available
Dec  1 16:28:31 np0005541603 dracut[1271]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  1 16:28:31 np0005541603 dracut[1271]: *** Including module: systemd ***
Dec  1 16:28:31 np0005541603 dracut[1271]: *** Including module: fips ***
Dec  1 16:28:32 np0005541603 dracut[1271]: *** Including module: systemd-initrd ***
Dec  1 16:28:32 np0005541603 dracut[1271]: *** Including module: i18n ***
Dec  1 16:28:32 np0005541603 dracut[1271]: *** Including module: drm ***
Dec  1 16:28:33 np0005541603 systemd[1]: serial-getty@ttyS0.service: Deactivated successfully.
Dec  1 16:28:33 np0005541603 dracut[1271]: *** Including module: prefixdevname ***
Dec  1 16:28:33 np0005541603 dracut[1271]: *** Including module: kernel-modules ***
Dec  1 16:28:33 np0005541603 systemd[1]: serial-getty@ttyS0.service: Scheduled restart job, restart counter is at 1.
Dec  1 16:28:33 np0005541603 systemd[1]: Stopped Serial Getty on ttyS0.
Dec  1 16:28:33 np0005541603 systemd[1]: Started Serial Getty on ttyS0.
Dec  1 16:28:34 np0005541603 kernel: block vda: the capability attribute has been deprecated.
Dec  1 16:28:34 np0005541603 dracut[1271]: *** Including module: kernel-modules-extra ***
Dec  1 16:28:34 np0005541603 dracut[1271]: *** Including module: qemu ***
Dec  1 16:28:34 np0005541603 dracut[1271]: *** Including module: fstab-sys ***
Dec  1 16:28:34 np0005541603 dracut[1271]: *** Including module: rootfs-block ***
Dec  1 16:28:34 np0005541603 dracut[1271]: *** Including module: terminfo ***
Dec  1 16:28:34 np0005541603 dracut[1271]: *** Including module: udev-rules ***
Dec  1 16:28:35 np0005541603 dracut[1271]: Skipping udev rule: 91-permissions.rules
Dec  1 16:28:35 np0005541603 dracut[1271]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  1 16:28:35 np0005541603 dracut[1271]: *** Including module: virtiofs ***
Dec  1 16:28:35 np0005541603 dracut[1271]: *** Including module: dracut-systemd ***
Dec  1 16:28:35 np0005541603 dracut[1271]: *** Including module: usrmount ***
Dec  1 16:28:35 np0005541603 dracut[1271]: *** Including module: base ***
Dec  1 16:28:36 np0005541603 dracut[1271]: *** Including module: fs-lib ***
Dec  1 16:28:36 np0005541603 dracut[1271]: *** Including module: kdumpbase ***
Dec  1 16:28:36 np0005541603 dracut[1271]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  1 16:28:36 np0005541603 dracut[1271]:  microcode_ctl module: mangling fw_dir
Dec  1 16:28:36 np0005541603 dracut[1271]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  1 16:28:36 np0005541603 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  1 16:28:36 np0005541603 dracut[1271]:    microcode_ctl: configuration "intel" is ignored
Dec  1 16:28:36 np0005541603 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  1 16:28:36 np0005541603 dracut[1271]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  1 16:28:36 np0005541603 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  1 16:28:36 np0005541603 dracut[1271]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  1 16:28:36 np0005541603 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  1 16:28:37 np0005541603 dracut[1271]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  1 16:28:37 np0005541603 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  1 16:28:37 np0005541603 dracut[1271]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  1 16:28:37 np0005541603 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  1 16:28:37 np0005541603 dracut[1271]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  1 16:28:37 np0005541603 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  1 16:28:37 np0005541603 dracut[1271]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  1 16:28:37 np0005541603 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  1 16:28:37 np0005541603 dracut[1271]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  1 16:28:37 np0005541603 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  1 16:28:37 np0005541603 dracut[1271]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  1 16:28:37 np0005541603 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  1 16:28:37 np0005541603 dracut[1271]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  1 16:28:37 np0005541603 dracut[1271]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec  1 16:28:37 np0005541603 dracut[1271]: *** Including module: openssl ***
Dec  1 16:28:37 np0005541603 dracut[1271]: *** Including module: shutdown ***
Dec  1 16:28:37 np0005541603 dracut[1271]: *** Including module: squash ***
Dec  1 16:28:37 np0005541603 dracut[1271]: *** Including modules done ***
Dec  1 16:28:37 np0005541603 dracut[1271]: *** Installing kernel module dependencies ***
Dec  1 16:28:38 np0005541603 dracut[1271]: *** Installing kernel module dependencies done ***
Dec  1 16:28:38 np0005541603 dracut[1271]: *** Resolving executable dependencies ***
Dec  1 16:28:40 np0005541603 dracut[1271]: *** Resolving executable dependencies done ***
Dec  1 16:28:40 np0005541603 dracut[1271]: *** Generating early-microcode cpio image ***
Dec  1 16:28:40 np0005541603 dracut[1271]: *** Store current command line parameters ***
Dec  1 16:28:40 np0005541603 dracut[1271]: Stored kernel commandline:
Dec  1 16:28:40 np0005541603 dracut[1271]: No dracut internal kernel commandline stored in the initramfs
Dec  1 16:28:41 np0005541603 dracut[1271]: *** Install squash loader ***
Dec  1 16:28:42 np0005541603 dracut[1271]: *** Squashing the files inside the initramfs ***
Dec  1 16:28:43 np0005541603 dracut[1271]: *** Squashing the files inside the initramfs done ***
Dec  1 16:28:43 np0005541603 dracut[1271]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec  1 16:28:43 np0005541603 dracut[1271]: *** Hardlinking files ***
Dec  1 16:28:43 np0005541603 dracut[1271]: *** Hardlinking files done ***
Dec  1 16:28:43 np0005541603 dracut[1271]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec  1 16:28:44 np0005541603 kdumpctl[1017]: kdump: kexec: loaded kdump kernel
Dec  1 16:28:44 np0005541603 kdumpctl[1017]: kdump: Starting kdump: [OK]
Dec  1 16:28:44 np0005541603 systemd[1]: Finished Crash recovery kernel arming.
Dec  1 16:28:44 np0005541603 systemd[1]: Startup finished in 1.666s (kernel) + 2.828s (initrd) + 29.924s (userspace) = 34.419s.
Dec  1 16:28:47 np0005541603 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 16:29:28 np0005541603 chronyd[795]: Selected source 167.160.187.179 (2.centos.pool.ntp.org)
Dec  1 16:30:00 np0005541603 systemd[1]: Created slice User Slice of UID 1000.
Dec  1 16:30:00 np0005541603 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  1 16:30:00 np0005541603 systemd-logind[788]: New session 1 of user zuul.
Dec  1 16:30:00 np0005541603 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  1 16:30:00 np0005541603 systemd[1]: Starting User Manager for UID 1000...
Dec  1 16:30:00 np0005541603 systemd[4306]: Queued start job for default target Main User Target.
Dec  1 16:30:00 np0005541603 systemd[4306]: Created slice User Application Slice.
Dec  1 16:30:00 np0005541603 systemd[4306]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  1 16:30:00 np0005541603 systemd[4306]: Started Daily Cleanup of User's Temporary Directories.
Dec  1 16:30:00 np0005541603 systemd[4306]: Reached target Paths.
Dec  1 16:30:00 np0005541603 systemd[4306]: Reached target Timers.
Dec  1 16:30:00 np0005541603 systemd[4306]: Starting D-Bus User Message Bus Socket...
Dec  1 16:30:00 np0005541603 systemd[4306]: Starting Create User's Volatile Files and Directories...
Dec  1 16:30:00 np0005541603 systemd[4306]: Listening on D-Bus User Message Bus Socket.
Dec  1 16:30:00 np0005541603 systemd[4306]: Reached target Sockets.
Dec  1 16:30:00 np0005541603 systemd[4306]: Finished Create User's Volatile Files and Directories.
Dec  1 16:30:00 np0005541603 systemd[4306]: Reached target Basic System.
Dec  1 16:30:00 np0005541603 systemd[4306]: Reached target Main User Target.
Dec  1 16:30:00 np0005541603 systemd[4306]: Startup finished in 128ms.
Dec  1 16:30:00 np0005541603 systemd[1]: Started User Manager for UID 1000.
Dec  1 16:30:00 np0005541603 systemd[1]: Started Session 1 of User zuul.
Dec  1 16:30:01 np0005541603 python3[4388]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 16:30:04 np0005541603 python3[4416]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 16:30:10 np0005541603 python3[4474]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 16:30:11 np0005541603 python3[4514]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  1 16:30:13 np0005541603 python3[4540]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC54fDyBee4UM7R9BPqhJSYMY5MZB5R1V2DfLp8yG1fwTNEPoUDVAq8ltIYKZdntpeN31sewBurclELUOguoJ0Gdz2KGlzqWij0Qu4Fo95JXmyqKBkSCUB+N1Vx8lgu6R7His0ONZU7IDltt3NhuvTGYhKUYC9aZF/IesLeuEazK/3JGZFPl6ym7XNPkv+txviPH9xEp+34Sw+DMvP0m5FXDH0wfvMejAFMQrnB4OH3+bhtmVKFLCWMnpnU3+G2pDexE3kqQrpA3RKAkztEtnBfTFtY8a0Ozx8W3ZOofBtqfd/1Byp8+tuSAadfeRYcj0F6UW3Mlm8CBFiB/Ovfeaawp2PCrsuJBQPjAFrStd4sdSOiZwJRAOW7gIzAUtSZ4bwl2n+ABzieoiL31lFM4kcMdowtHLGfFdaeNKlE/UINdVeKsSFpQlMjvq2oAEQpeM5T9JAdS0M4Eeu1R2aLUQDJxeUm4cp5M7VJZqEJuvwWRjLZITZ+YILsLuBqi0meZ7U= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:13 np0005541603 python3[4564]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:30:14 np0005541603 python3[4663]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:30:14 np0005541603 python3[4734]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764624614.1960077-207-163626695348204/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=4ba83aa3ef144c608a6843d408b87526_id_rsa follow=False checksum=488c70873d455ab8685f804a035cbe8a8cb34698 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:30:15 np0005541603 python3[4857]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:30:15 np0005541603 python3[4928]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764624615.169489-240-197534596807106/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=4ba83aa3ef144c608a6843d408b87526_id_rsa.pub follow=False checksum=24353f99a57156a51758723e5d88ea7495fc62b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:30:17 np0005541603 python3[4976]: ansible-ping Invoked with data=pong
Dec  1 16:30:18 np0005541603 python3[5000]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 16:30:20 np0005541603 python3[5058]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  1 16:30:21 np0005541603 python3[5090]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:30:21 np0005541603 python3[5114]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:30:21 np0005541603 python3[5138]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:30:21 np0005541603 python3[5162]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:30:22 np0005541603 python3[5186]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:30:22 np0005541603 python3[5210]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:30:24 np0005541603 python3[5236]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:30:24 np0005541603 python3[5314]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:30:25 np0005541603 python3[5387]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764624624.1422164-21-207570723776192/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:30:25 np0005541603 python3[5435]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:26 np0005541603 python3[5459]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:26 np0005541603 python3[5483]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:26 np0005541603 python3[5507]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:26 np0005541603 python3[5531]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:27 np0005541603 python3[5555]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:27 np0005541603 python3[5579]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:27 np0005541603 python3[5603]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:28 np0005541603 python3[5627]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:28 np0005541603 python3[5651]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:28 np0005541603 python3[5675]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:28 np0005541603 python3[5699]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:29 np0005541603 python3[5723]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:29 np0005541603 python3[5747]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:29 np0005541603 python3[5771]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:30 np0005541603 python3[5795]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:30 np0005541603 python3[5819]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:30 np0005541603 python3[5843]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:31 np0005541603 python3[5867]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:31 np0005541603 python3[5891]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:31 np0005541603 python3[5915]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:32 np0005541603 python3[5939]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:32 np0005541603 python3[5963]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:32 np0005541603 python3[5987]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:33 np0005541603 python3[6011]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:33 np0005541603 python3[6035]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:30:36 np0005541603 python3[6061]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  1 16:30:36 np0005541603 systemd[1]: Starting Time & Date Service...
Dec  1 16:30:36 np0005541603 systemd[1]: Started Time & Date Service.
Dec  1 16:30:36 np0005541603 systemd-timedated[6063]: Changed time zone to 'UTC' (UTC).
Dec  1 16:30:36 np0005541603 python3[6092]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:30:37 np0005541603 python3[6168]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:30:37 np0005541603 python3[6239]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764624636.8544395-153-212106123956805/source _original_basename=tmp90y045f4 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:30:38 np0005541603 python3[6339]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:30:38 np0005541603 python3[6410]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764624637.75814-183-221866767314657/source _original_basename=tmp6whdytaz follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:30:39 np0005541603 python3[6512]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:30:39 np0005541603 python3[6585]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764624638.9699545-231-254013574755214/source _original_basename=tmpr_kbdep2 follow=False checksum=6c462e10cf6b935fb22f4386c31d576dcf4d4133 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:30:40 np0005541603 python3[6633]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 16:30:40 np0005541603 python3[6659]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 16:30:41 np0005541603 python3[6739]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:30:41 np0005541603 python3[6812]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764624640.8394005-273-119069721216998/source _original_basename=tmpia8hjj7f follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:30:42 np0005541603 python3[6863]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-397e-2641-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 16:30:42 np0005541603 python3[6891]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-397e-2641-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec  1 16:30:44 np0005541603 python3[6919]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:31:01 np0005541603 python3[6945]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:31:06 np0005541603 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  1 16:31:33 np0005541603 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  1 16:31:33 np0005541603 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  1 16:31:33 np0005541603 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  1 16:31:33 np0005541603 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  1 16:31:33 np0005541603 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  1 16:31:33 np0005541603 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  1 16:31:33 np0005541603 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  1 16:31:33 np0005541603 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  1 16:31:33 np0005541603 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  1 16:31:33 np0005541603 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  1 16:31:33 np0005541603 NetworkManager[860]: <info>  [1764624693.8438] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  1 16:31:33 np0005541603 systemd-udevd[6949]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 16:31:33 np0005541603 NetworkManager[860]: <info>  [1764624693.8693] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 16:31:33 np0005541603 NetworkManager[860]: <info>  [1764624693.8739] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  1 16:31:33 np0005541603 NetworkManager[860]: <info>  [1764624693.8745] device (eth1): carrier: link connected
Dec  1 16:31:33 np0005541603 NetworkManager[860]: <info>  [1764624693.8748] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  1 16:31:33 np0005541603 NetworkManager[860]: <info>  [1764624693.8759] policy: auto-activating connection 'Wired connection 1' (5c4fb02b-17b8-3fe0-8f9e-dc676dff4023)
Dec  1 16:31:33 np0005541603 NetworkManager[860]: <info>  [1764624693.8766] device (eth1): Activation: starting connection 'Wired connection 1' (5c4fb02b-17b8-3fe0-8f9e-dc676dff4023)
Dec  1 16:31:33 np0005541603 NetworkManager[860]: <info>  [1764624693.8768] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 16:31:33 np0005541603 NetworkManager[860]: <info>  [1764624693.8772] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 16:31:33 np0005541603 NetworkManager[860]: <info>  [1764624693.8780] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 16:31:33 np0005541603 NetworkManager[860]: <info>  [1764624693.8787] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  1 16:31:34 np0005541603 python3[6975]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-91ac-29ad-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 16:31:42 np0005541603 python3[7055]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:31:42 np0005541603 python3[7128]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764624701.646807-102-71529969725778/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=6b772a8d71721790353599206b0e1c89187241cb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:31:43 np0005541603 python3[7178]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 16:31:43 np0005541603 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  1 16:31:43 np0005541603 systemd[1]: Stopped Network Manager Wait Online.
Dec  1 16:31:43 np0005541603 systemd[1]: Stopping Network Manager Wait Online...
Dec  1 16:31:43 np0005541603 systemd[1]: Stopping Network Manager...
Dec  1 16:31:43 np0005541603 NetworkManager[860]: <info>  [1764624703.4791] caught SIGTERM, shutting down normally.
Dec  1 16:31:43 np0005541603 NetworkManager[860]: <info>  [1764624703.4805] dhcp4 (eth0): canceled DHCP transaction
Dec  1 16:31:43 np0005541603 NetworkManager[860]: <info>  [1764624703.4806] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 16:31:43 np0005541603 NetworkManager[860]: <info>  [1764624703.4806] dhcp4 (eth0): state changed no lease
Dec  1 16:31:43 np0005541603 NetworkManager[860]: <info>  [1764624703.4809] manager: NetworkManager state is now CONNECTING
Dec  1 16:31:43 np0005541603 NetworkManager[860]: <info>  [1764624703.4893] dhcp4 (eth1): canceled DHCP transaction
Dec  1 16:31:43 np0005541603 NetworkManager[860]: <info>  [1764624703.4894] dhcp4 (eth1): state changed no lease
Dec  1 16:31:43 np0005541603 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 16:31:43 np0005541603 NetworkManager[860]: <info>  [1764624703.4960] exiting (success)
Dec  1 16:31:43 np0005541603 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 16:31:43 np0005541603 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  1 16:31:43 np0005541603 systemd[1]: Stopped Network Manager.
Dec  1 16:31:43 np0005541603 systemd[1]: NetworkManager.service: Consumed 1.736s CPU time, 10.2M memory peak.
Dec  1 16:31:43 np0005541603 systemd[1]: Starting Network Manager...
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.5690] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:7a82d3c7-3900-45d2-a5fc-f942d952501d)
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.5696] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.5801] manager[0x55597c847070]: monitoring kernel firmware directory '/lib/firmware'.
Dec  1 16:31:43 np0005541603 systemd[1]: Starting Hostname Service...
Dec  1 16:31:43 np0005541603 systemd[1]: Started Hostname Service.
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.6995] hostname: hostname: using hostnamed
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7000] hostname: static hostname changed from (none) to "np0005541603.novalocal"
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7008] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7016] manager[0x55597c847070]: rfkill: Wi-Fi hardware radio set enabled
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7016] manager[0x55597c847070]: rfkill: WWAN hardware radio set enabled
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7069] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7070] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7071] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7072] manager: Networking is enabled by state file
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7080] settings: Loaded settings plugin: keyfile (internal)
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7087] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7133] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7150] dhcp: init: Using DHCP client 'internal'
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7155] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7164] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7172] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7185] device (lo): Activation: starting connection 'lo' (6817b782-5092-4502-b86b-5365c44c46c2)
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7198] device (eth0): carrier: link connected
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7206] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7214] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7215] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7224] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7237] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7247] device (eth1): carrier: link connected
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7254] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7261] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (5c4fb02b-17b8-3fe0-8f9e-dc676dff4023) (indicated)
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7262] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7269] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7282] device (eth1): Activation: starting connection 'Wired connection 1' (5c4fb02b-17b8-3fe0-8f9e-dc676dff4023)
Dec  1 16:31:43 np0005541603 systemd[1]: Started Network Manager.
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7292] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7309] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7315] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7320] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7326] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7334] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7338] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7344] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7349] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7362] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7368] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7386] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7392] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7420] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7429] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7440] device (lo): Activation: successful, device activated.
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7455] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7469] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  1 16:31:43 np0005541603 systemd[1]: Starting Network Manager Wait Online...
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7556] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7596] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7599] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7606] manager: NetworkManager state is now CONNECTED_SITE
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7614] device (eth0): Activation: successful, device activated.
Dec  1 16:31:43 np0005541603 NetworkManager[7186]: <info>  [1764624703.7622] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  1 16:31:44 np0005541603 python3[7263]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-91ac-29ad-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 16:31:53 np0005541603 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 16:32:13 np0005541603 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.3766] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  1 16:32:29 np0005541603 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 16:32:29 np0005541603 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4118] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4126] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4150] device (eth1): Activation: successful, device activated.
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4167] manager: startup complete
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4172] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <warn>  [1764624749.4189] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4209] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  1 16:32:29 np0005541603 systemd[1]: Finished Network Manager Wait Online.
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4353] dhcp4 (eth1): canceled DHCP transaction
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4354] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4354] dhcp4 (eth1): state changed no lease
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4388] policy: auto-activating connection 'ci-private-network' (a06cac8d-0534-52c1-8613-26ec75623b46)
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4397] device (eth1): Activation: starting connection 'ci-private-network' (a06cac8d-0534-52c1-8613-26ec75623b46)
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4399] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4409] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4422] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4439] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4488] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4493] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 16:32:29 np0005541603 NetworkManager[7186]: <info>  [1764624749.4511] device (eth1): Activation: successful, device activated.
Dec  1 16:32:32 np0005541603 systemd[4306]: Starting Mark boot as successful...
Dec  1 16:32:32 np0005541603 systemd[4306]: Finished Mark boot as successful.
Dec  1 16:32:37 np0005541603 python3[7369]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:32:37 np0005541603 python3[7442]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764624756.6666322-259-189975466322435/source _original_basename=tmpc52gg17x follow=False checksum=6bf3356d3c3f4ae18e2f89b8822eb6bfe7d75df4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:32:39 np0005541603 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 16:33:37 np0005541603 systemd-logind[788]: Session 1 logged out. Waiting for processes to exit.
Dec  1 16:35:32 np0005541603 systemd[4306]: Created slice User Background Tasks Slice.
Dec  1 16:35:32 np0005541603 systemd[4306]: Starting Cleanup of User's Temporary Files and Directories...
Dec  1 16:35:32 np0005541603 systemd[4306]: Finished Cleanup of User's Temporary Files and Directories.
Dec  1 16:37:02 np0005541603 chronyd[795]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Dec  1 16:37:36 np0005541603 systemd-logind[788]: New session 3 of user zuul.
Dec  1 16:37:36 np0005541603 systemd[1]: Started Session 3 of User zuul.
Dec  1 16:37:37 np0005541603 python3[7501]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8ecf-5425-000000001cda-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 16:37:37 np0005541603 python3[7530]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:37:38 np0005541603 python3[7556]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:37:38 np0005541603 python3[7582]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:37:38 np0005541603 python3[7608]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:37:39 np0005541603 python3[7634]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:37:39 np0005541603 python3[7712]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:37:40 np0005541603 python3[7785]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764625059.4087384-483-248729403142554/source _original_basename=tmpditxupl_ follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:37:41 np0005541603 python3[7835]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 16:37:41 np0005541603 systemd[1]: Reloading.
Dec  1 16:37:41 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 16:37:42 np0005541603 python3[7891]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  1 16:37:43 np0005541603 python3[7917]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 16:37:43 np0005541603 python3[7945]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 16:37:43 np0005541603 python3[7973]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 16:37:44 np0005541603 python3[8001]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 16:37:44 np0005541603 python3[8028]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8ecf-5425-000000001ce1-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 16:37:45 np0005541603 python3[8058]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  1 16:37:46 np0005541603 systemd-logind[788]: Session 3 logged out. Waiting for processes to exit.
Dec  1 16:37:46 np0005541603 systemd[1]: session-3.scope: Deactivated successfully.
Dec  1 16:37:46 np0005541603 systemd[1]: session-3.scope: Consumed 4.736s CPU time.
Dec  1 16:37:46 np0005541603 systemd-logind[788]: Removed session 3.
Dec  1 16:37:48 np0005541603 systemd-logind[788]: New session 4 of user zuul.
Dec  1 16:37:48 np0005541603 systemd[1]: Started Session 4 of User zuul.
Dec  1 16:37:48 np0005541603 python3[8093]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  1 16:38:03 np0005541603 kernel: SELinux:  Converting 385 SID table entries...
Dec  1 16:38:03 np0005541603 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 16:38:03 np0005541603 kernel: SELinux:  policy capability open_perms=1
Dec  1 16:38:03 np0005541603 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 16:38:03 np0005541603 kernel: SELinux:  policy capability always_check_network=0
Dec  1 16:38:03 np0005541603 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 16:38:03 np0005541603 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 16:38:03 np0005541603 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 16:38:12 np0005541603 kernel: SELinux:  Converting 385 SID table entries...
Dec  1 16:38:12 np0005541603 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 16:38:12 np0005541603 kernel: SELinux:  policy capability open_perms=1
Dec  1 16:38:12 np0005541603 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 16:38:12 np0005541603 kernel: SELinux:  policy capability always_check_network=0
Dec  1 16:38:12 np0005541603 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 16:38:12 np0005541603 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 16:38:12 np0005541603 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 16:38:21 np0005541603 kernel: SELinux:  Converting 385 SID table entries...
Dec  1 16:38:21 np0005541603 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 16:38:21 np0005541603 kernel: SELinux:  policy capability open_perms=1
Dec  1 16:38:21 np0005541603 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 16:38:21 np0005541603 kernel: SELinux:  policy capability always_check_network=0
Dec  1 16:38:21 np0005541603 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 16:38:21 np0005541603 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 16:38:21 np0005541603 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 16:38:22 np0005541603 setsebool[8156]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  1 16:38:22 np0005541603 setsebool[8156]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec  1 16:38:33 np0005541603 kernel: SELinux:  Converting 388 SID table entries...
Dec  1 16:38:33 np0005541603 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 16:38:33 np0005541603 kernel: SELinux:  policy capability open_perms=1
Dec  1 16:38:33 np0005541603 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 16:38:33 np0005541603 kernel: SELinux:  policy capability always_check_network=0
Dec  1 16:38:33 np0005541603 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 16:38:33 np0005541603 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 16:38:33 np0005541603 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 16:38:51 np0005541603 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  1 16:38:51 np0005541603 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 16:38:51 np0005541603 systemd[1]: Starting man-db-cache-update.service...
Dec  1 16:38:51 np0005541603 systemd[1]: Reloading.
Dec  1 16:38:51 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 16:38:52 np0005541603 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 16:38:54 np0005541603 python3[10263]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-e72d-2283-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 16:38:54 np0005541603 kernel: evm: overlay not supported
Dec  1 16:38:54 np0005541603 systemd[4306]: Starting D-Bus User Message Bus...
Dec  1 16:38:54 np0005541603 dbus-broker-launch[11443]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  1 16:38:54 np0005541603 dbus-broker-launch[11443]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  1 16:38:54 np0005541603 systemd[4306]: Started D-Bus User Message Bus.
Dec  1 16:38:54 np0005541603 dbus-broker-lau[11443]: Ready
Dec  1 16:38:54 np0005541603 systemd[4306]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  1 16:38:54 np0005541603 systemd[4306]: Created slice Slice /user.
Dec  1 16:38:54 np0005541603 systemd[4306]: podman-11292.scope: unit configures an IP firewall, but not running as root.
Dec  1 16:38:54 np0005541603 systemd[4306]: (This warning is only shown for the first unit using IP firewalling.)
Dec  1 16:38:54 np0005541603 systemd[4306]: Started podman-11292.scope.
Dec  1 16:38:55 np0005541603 systemd[4306]: Started podman-pause-df602f5d.scope.
Dec  1 16:38:55 np0005541603 python3[12144]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.107:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.107:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:38:55 np0005541603 python3[12144]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec  1 16:38:56 np0005541603 systemd[1]: session-4.scope: Deactivated successfully.
Dec  1 16:38:56 np0005541603 systemd[1]: session-4.scope: Consumed 59.737s CPU time.
Dec  1 16:38:56 np0005541603 systemd-logind[788]: Session 4 logged out. Waiting for processes to exit.
Dec  1 16:38:56 np0005541603 systemd-logind[788]: Removed session 4.
Dec  1 16:39:21 np0005541603 systemd-logind[788]: New session 5 of user zuul.
Dec  1 16:39:21 np0005541603 systemd[1]: Started Session 5 of User zuul.
Dec  1 16:39:21 np0005541603 python3[22003]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBbCZeXrBxi9DXK6OzBLenBbIhaarnXX98LsCzy8wF0HZoRSSitr3dzKPeZ6LIAC/1UigpTaCDjjH2nsnDjqzyE= zuul@np0005541602.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:39:21 np0005541603 python3[22088]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBbCZeXrBxi9DXK6OzBLenBbIhaarnXX98LsCzy8wF0HZoRSSitr3dzKPeZ6LIAC/1UigpTaCDjjH2nsnDjqzyE= zuul@np0005541602.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:39:22 np0005541603 python3[22433]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541603.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  1 16:39:24 np0005541603 python3[23122]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBbCZeXrBxi9DXK6OzBLenBbIhaarnXX98LsCzy8wF0HZoRSSitr3dzKPeZ6LIAC/1UigpTaCDjjH2nsnDjqzyE= zuul@np0005541602.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 16:39:25 np0005541603 python3[23388]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:39:25 np0005541603 python3[23672]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764625164.9788527-135-49506169398353/source _original_basename=tmphhm6q7mb follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:39:26 np0005541603 python3[23996]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec  1 16:39:26 np0005541603 systemd[1]: Starting Hostname Service...
Dec  1 16:39:26 np0005541603 systemd[1]: Started Hostname Service.
Dec  1 16:39:26 np0005541603 systemd-hostnamed[24103]: Changed pretty hostname to 'compute-0'
Dec  1 16:39:26 np0005541603 systemd-hostnamed[24103]: Hostname set to <compute-0> (static)
Dec  1 16:39:26 np0005541603 NetworkManager[7186]: <info>  [1764625166.8870] hostname: static hostname changed from "np0005541603.novalocal" to "compute-0"
Dec  1 16:39:26 np0005541603 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 16:39:26 np0005541603 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 16:39:27 np0005541603 systemd[1]: session-5.scope: Deactivated successfully.
Dec  1 16:39:27 np0005541603 systemd[1]: session-5.scope: Consumed 2.580s CPU time.
Dec  1 16:39:27 np0005541603 systemd-logind[788]: Session 5 logged out. Waiting for processes to exit.
Dec  1 16:39:27 np0005541603 systemd-logind[788]: Removed session 5.
Dec  1 16:39:36 np0005541603 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 16:39:37 np0005541603 irqbalance[782]: Cannot change IRQ 27 affinity: Operation not permitted
Dec  1 16:39:37 np0005541603 irqbalance[782]: IRQ 27 affinity is now unmanaged
Dec  1 16:39:44 np0005541603 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 16:39:44 np0005541603 systemd[1]: Finished man-db-cache-update.service.
Dec  1 16:39:44 np0005541603 systemd[1]: man-db-cache-update.service: Consumed 1min 3.471s CPU time.
Dec  1 16:39:44 np0005541603 systemd[1]: run-rf05e0ff9c22c496e95d9dc33553562db.service: Deactivated successfully.
Dec  1 16:39:56 np0005541603 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 16:42:22 np0005541603 systemd[1]: Starting dnf makecache...
Dec  1 16:42:22 np0005541603 dnf[29920]: Failed determining last makecache time.
Dec  1 16:42:23 np0005541603 dnf[29920]: CentOS Stream 9 - BaseOS                         30 kB/s | 7.3 kB     00:00
Dec  1 16:42:23 np0005541603 dnf[29920]: CentOS Stream 9 - AppStream                      32 kB/s | 7.4 kB     00:00
Dec  1 16:42:23 np0005541603 dnf[29920]: CentOS Stream 9 - CRB                            26 kB/s | 7.2 kB     00:00
Dec  1 16:42:24 np0005541603 dnf[29920]: CentOS Stream 9 - Extras packages                75 kB/s | 8.3 kB     00:00
Dec  1 16:42:24 np0005541603 dnf[29920]: Metadata cache created.
Dec  1 16:42:24 np0005541603 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  1 16:42:24 np0005541603 systemd[1]: Finished dnf makecache.
Dec  1 16:43:32 np0005541603 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  1 16:43:32 np0005541603 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  1 16:43:32 np0005541603 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  1 16:43:32 np0005541603 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  1 16:44:15 np0005541603 systemd-logind[788]: New session 6 of user zuul.
Dec  1 16:44:15 np0005541603 systemd[1]: Started Session 6 of User zuul.
Dec  1 16:44:16 np0005541603 python3[30005]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 16:44:17 np0005541603 python3[30121]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:44:18 np0005541603 python3[30194]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764625457.581091-33581-77156650654496/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:44:18 np0005541603 python3[30220]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:44:18 np0005541603 python3[30293]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764625457.581091-33581-77156650654496/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:44:19 np0005541603 python3[30319]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:44:19 np0005541603 python3[30392]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764625457.581091-33581-77156650654496/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:44:19 np0005541603 python3[30418]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:44:20 np0005541603 python3[30491]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764625457.581091-33581-77156650654496/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:44:20 np0005541603 python3[30519]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:44:20 np0005541603 python3[30592]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764625457.581091-33581-77156650654496/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:44:21 np0005541603 python3[30618]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:44:21 np0005541603 python3[30691]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764625457.581091-33581-77156650654496/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:44:22 np0005541603 python3[30717]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 16:44:22 np0005541603 python3[30790]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764625457.581091-33581-77156650654496/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 16:47:10 np0005541603 python3[30852]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 16:52:10 np0005541603 systemd[1]: session-6.scope: Deactivated successfully.
Dec  1 16:52:10 np0005541603 systemd[1]: session-6.scope: Consumed 5.195s CPU time.
Dec  1 16:52:10 np0005541603 systemd-logind[788]: Session 6 logged out. Waiting for processes to exit.
Dec  1 16:52:10 np0005541603 systemd-logind[788]: Removed session 6.
Dec  1 16:59:52 np0005541603 systemd-logind[788]: New session 7 of user zuul.
Dec  1 16:59:52 np0005541603 systemd[1]: Started Session 7 of User zuul.
Dec  1 16:59:53 np0005541603 python3.9[31048]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 16:59:55 np0005541603 python3.9[31229]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:00:02 np0005541603 systemd[1]: session-7.scope: Deactivated successfully.
Dec  1 17:00:02 np0005541603 systemd[1]: session-7.scope: Consumed 7.810s CPU time.
Dec  1 17:00:02 np0005541603 systemd-logind[788]: Session 7 logged out. Waiting for processes to exit.
Dec  1 17:00:02 np0005541603 systemd-logind[788]: Removed session 7.
Dec  1 17:00:07 np0005541603 systemd-logind[788]: New session 8 of user zuul.
Dec  1 17:00:07 np0005541603 systemd[1]: Started Session 8 of User zuul.
Dec  1 17:00:09 np0005541603 python3.9[31444]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:00:10 np0005541603 systemd[1]: session-8.scope: Deactivated successfully.
Dec  1 17:00:10 np0005541603 systemd-logind[788]: Session 8 logged out. Waiting for processes to exit.
Dec  1 17:00:10 np0005541603 systemd-logind[788]: Removed session 8.
Dec  1 17:00:28 np0005541603 systemd-logind[788]: New session 9 of user zuul.
Dec  1 17:00:28 np0005541603 systemd[1]: Started Session 9 of User zuul.
Dec  1 17:00:29 np0005541603 python3.9[31628]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  1 17:00:31 np0005541603 python3.9[31802]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:00:32 np0005541603 python3.9[31954]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:00:33 np0005541603 python3.9[32107]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:00:33 np0005541603 python3.9[32259]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:00:34 np0005541603 python3.9[32411]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:00:35 np0005541603 python3.9[32534]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764626434.1767852-73-277575397968081/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:00:36 np0005541603 python3.9[32686]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:00:37 np0005541603 python3.9[32842]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:00:38 np0005541603 python3.9[32994]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:00:39 np0005541603 python3.9[33144]: ansible-ansible.builtin.service_facts Invoked
Dec  1 17:00:44 np0005541603 python3.9[33397]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:00:45 np0005541603 python3.9[33547]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:00:46 np0005541603 python3.9[33701]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:00:47 np0005541603 python3.9[33859]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 17:00:48 np0005541603 python3.9[33943]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 17:02:02 np0005541603 systemd[1]: Reloading.
Dec  1 17:02:03 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:02:03 np0005541603 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  1 17:02:03 np0005541603 systemd[1]: Reloading.
Dec  1 17:02:03 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:02:03 np0005541603 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  1 17:02:03 np0005541603 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  1 17:02:03 np0005541603 systemd[1]: Reloading.
Dec  1 17:02:03 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:02:04 np0005541603 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  1 17:02:04 np0005541603 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Dec  1 17:02:04 np0005541603 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Dec  1 17:02:04 np0005541603 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Dec  1 17:03:11 np0005541603 kernel: SELinux:  Converting 2718 SID table entries...
Dec  1 17:03:11 np0005541603 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 17:03:11 np0005541603 kernel: SELinux:  policy capability open_perms=1
Dec  1 17:03:11 np0005541603 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 17:03:11 np0005541603 kernel: SELinux:  policy capability always_check_network=0
Dec  1 17:03:11 np0005541603 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 17:03:11 np0005541603 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 17:03:11 np0005541603 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 17:03:11 np0005541603 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  1 17:03:11 np0005541603 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 17:03:12 np0005541603 systemd[1]: Starting man-db-cache-update.service...
Dec  1 17:03:12 np0005541603 systemd[1]: Reloading.
Dec  1 17:03:12 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:03:12 np0005541603 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 17:03:13 np0005541603 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 17:03:13 np0005541603 systemd[1]: Finished man-db-cache-update.service.
Dec  1 17:03:13 np0005541603 systemd[1]: man-db-cache-update.service: Consumed 1.470s CPU time.
Dec  1 17:03:13 np0005541603 systemd[1]: run-r4a3b91b0177b4ab39d828e0b3fd2eb95.service: Deactivated successfully.
Dec  1 17:03:13 np0005541603 python3.9[35494]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:03:15 np0005541603 python3.9[35777]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  1 17:03:16 np0005541603 python3.9[35929]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  1 17:03:19 np0005541603 python3.9[36082]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:03:20 np0005541603 python3.9[36234]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  1 17:03:21 np0005541603 python3.9[36386]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:03:22 np0005541603 python3.9[36538]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:03:25 np0005541603 python3.9[36661]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764626602.1183293-236-223440381599835/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81ec6f5b857a0813598f2d4eac5c983645f334f3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:03:28 np0005541603 python3.9[36813]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:03:29 np0005541603 python3.9[36965]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:03:30 np0005541603 python3.9[37118]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:03:31 np0005541603 python3.9[37270]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  1 17:03:31 np0005541603 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 17:03:32 np0005541603 python3.9[37424]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 17:03:34 np0005541603 python3.9[37582]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  1 17:03:35 np0005541603 python3.9[37742]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  1 17:03:35 np0005541603 python3.9[37895]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 17:03:36 np0005541603 python3.9[38053]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  1 17:03:37 np0005541603 python3.9[38205]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 17:03:40 np0005541603 python3.9[38358]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:03:41 np0005541603 python3.9[38510]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:03:42 np0005541603 python3.9[38633]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764626620.8230147-355-185452598345886/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:03:43 np0005541603 python3.9[38785]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:03:43 np0005541603 systemd[1]: Starting Load Kernel Modules...
Dec  1 17:03:43 np0005541603 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  1 17:03:43 np0005541603 kernel: Bridge firewalling registered
Dec  1 17:03:43 np0005541603 systemd-modules-load[38789]: Inserted module 'br_netfilter'
Dec  1 17:03:43 np0005541603 systemd[1]: Finished Load Kernel Modules.
Dec  1 17:03:44 np0005541603 python3.9[38946]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:03:45 np0005541603 python3.9[39069]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764626623.7340236-378-175805048468096/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:03:46 np0005541603 python3.9[39221]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 17:03:49 np0005541603 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Dec  1 17:03:49 np0005541603 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Dec  1 17:03:50 np0005541603 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 17:03:50 np0005541603 systemd[1]: Starting man-db-cache-update.service...
Dec  1 17:03:50 np0005541603 systemd[1]: Reloading.
Dec  1 17:03:50 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:03:50 np0005541603 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 17:03:52 np0005541603 python3.9[40486]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:03:53 np0005541603 python3.9[41336]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  1 17:03:53 np0005541603 python3.9[42099]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:03:54 np0005541603 python3.9[42951]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:03:55 np0005541603 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  1 17:03:55 np0005541603 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 17:03:55 np0005541603 systemd[1]: Finished man-db-cache-update.service.
Dec  1 17:03:55 np0005541603 systemd[1]: man-db-cache-update.service: Consumed 6.438s CPU time.
Dec  1 17:03:55 np0005541603 systemd[1]: run-r8c97abde767c44dfb080d172f3296071.service: Deactivated successfully.
Dec  1 17:03:55 np0005541603 systemd[1]: Starting Authorization Manager...
Dec  1 17:03:55 np0005541603 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  1 17:03:55 np0005541603 polkitd[43601]: Started polkitd version 0.117
Dec  1 17:03:55 np0005541603 systemd[1]: Started Authorization Manager.
Dec  1 17:03:56 np0005541603 python3.9[43771]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:03:56 np0005541603 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  1 17:03:57 np0005541603 systemd[1]: tuned.service: Deactivated successfully.
Dec  1 17:03:57 np0005541603 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  1 17:03:57 np0005541603 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  1 17:03:57 np0005541603 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  1 17:03:57 np0005541603 python3.9[43932]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  1 17:04:00 np0005541603 python3.9[44084]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:04:00 np0005541603 systemd[1]: Reloading.
Dec  1 17:04:00 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:04:01 np0005541603 python3.9[44273]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:04:01 np0005541603 systemd[1]: Reloading.
Dec  1 17:04:01 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:04:03 np0005541603 python3.9[44465]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:04:03 np0005541603 python3.9[44618]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:04:03 np0005541603 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec  1 17:04:04 np0005541603 python3.9[44771]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:04:07 np0005541603 python3.9[44933]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:04:08 np0005541603 python3.9[45086]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:04:08 np0005541603 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  1 17:04:08 np0005541603 systemd[1]: Stopped Apply Kernel Variables.
Dec  1 17:04:08 np0005541603 systemd[1]: Stopping Apply Kernel Variables...
Dec  1 17:04:08 np0005541603 systemd[1]: Starting Apply Kernel Variables...
Dec  1 17:04:08 np0005541603 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  1 17:04:08 np0005541603 systemd[1]: Finished Apply Kernel Variables.
Dec  1 17:04:08 np0005541603 systemd[1]: session-9.scope: Deactivated successfully.
Dec  1 17:04:08 np0005541603 systemd[1]: session-9.scope: Consumed 2min 27.955s CPU time.
Dec  1 17:04:08 np0005541603 systemd-logind[788]: Session 9 logged out. Waiting for processes to exit.
Dec  1 17:04:08 np0005541603 systemd-logind[788]: Removed session 9.
Dec  1 17:04:14 np0005541603 systemd-logind[788]: New session 10 of user zuul.
Dec  1 17:04:14 np0005541603 systemd[1]: Started Session 10 of User zuul.
Dec  1 17:04:15 np0005541603 python3.9[45269]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:04:17 np0005541603 python3.9[45423]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:04:18 np0005541603 python3.9[45579]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:04:19 np0005541603 python3.9[45730]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:04:20 np0005541603 python3.9[45886]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 17:04:21 np0005541603 python3.9[45970]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 17:04:24 np0005541603 python3.9[46125]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 17:04:25 np0005541603 python3.9[46296]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:04:26 np0005541603 python3.9[46448]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:04:26 np0005541603 systemd[1]: var-lib-containers-storage-overlay-compat1055437015-merged.mount: Deactivated successfully.
Dec  1 17:04:26 np0005541603 podman[46449]: 2025-12-01 22:04:26.11300849 +0000 UTC m=+0.078418038 system refresh
Dec  1 17:04:27 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:04:27 np0005541603 python3.9[46612]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:04:27 np0005541603 python3.9[46735]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764626666.3737009-109-251775096002349/.source.json follow=False _original_basename=podman_network_config.j2 checksum=f950594fe5714d59b0ce919aacda782a6574dbee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:04:28 np0005541603 python3.9[46887]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:04:29 np0005541603 python3.9[47010]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764626668.1037824-124-16010957762938/.source.conf follow=False _original_basename=registries.conf.j2 checksum=bd8960d09011f95ec8946d00609d580926fa47cd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:04:30 np0005541603 python3.9[47164]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:04:31 np0005541603 python3.9[47316]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:04:31 np0005541603 python3.9[47468]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:04:32 np0005541603 python3.9[47620]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:04:33 np0005541603 python3.9[47772]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:04:34 np0005541603 python3.9[47926]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 17:04:36 np0005541603 python3.9[48079]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 17:04:39 np0005541603 python3.9[48239]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 17:04:41 np0005541603 python3.9[48392]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 17:04:44 np0005541603 python3.9[48545]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 17:04:46 np0005541603 python3.9[48701]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 17:04:51 np0005541603 python3.9[48872]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 17:04:53 np0005541603 python3.9[49025]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 17:05:09 np0005541603 python3.9[49366]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 17:05:12 np0005541603 python3.9[49522]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:05:12 np0005541603 python3.9[49697]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:05:13 np0005541603 python3.9[49820]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764626712.2920516-272-31298136677337/.source.json _original_basename=.8z07qpq0 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:05:14 np0005541603 python3.9[49972]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 17:05:14 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:05:17 np0005541603 systemd[1]: var-lib-containers-storage-overlay-compat2223537535-lower\x2dmapped.mount: Deactivated successfully.
Dec  1 17:05:20 np0005541603 podman[49984]: 2025-12-01 22:05:20.727155706 +0000 UTC m=+5.877943177 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  1 17:05:20 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:05:20 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:05:20 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:05:21 np0005541603 python3.9[50278]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 17:05:22 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:05:32 np0005541603 podman[50290]: 2025-12-01 22:05:32.715546579 +0000 UTC m=+10.686176338 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 17:05:32 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:05:32 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:05:32 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:05:34 np0005541603 python3.9[50587]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 17:05:34 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:05:35 np0005541603 podman[50599]: 2025-12-01 22:05:35.232898201 +0000 UTC m=+1.069380065 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  1 17:05:35 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:05:35 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:05:35 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:05:36 np0005541603 python3.9[50835]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 17:05:36 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:05:50 np0005541603 podman[50848]: 2025-12-01 22:05:50.854557916 +0000 UTC m=+14.277335769 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  1 17:05:50 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:05:50 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:05:51 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:05:54 np0005541603 python3.9[51104]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 17:05:54 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:06:13 np0005541603 podman[51117]: 2025-12-01 22:06:13.8846176 +0000 UTC m=+19.382603663 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec  1 17:06:13 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:06:13 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:06:14 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:06:15 np0005541603 python3.9[51455]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 17:06:15 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:06:16 np0005541603 podman[51467]: 2025-12-01 22:06:16.487711549 +0000 UTC m=+1.368719354 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  1 17:06:16 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:06:16 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:06:16 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:06:17 np0005541603 python3.9[51746]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 17:06:17 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:06:21 np0005541603 podman[51758]: 2025-12-01 22:06:21.050920229 +0000 UTC m=+3.235839619 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec  1 17:06:21 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:06:21 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:06:21 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:06:22 np0005541603 python3.9[52014]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 17:06:22 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:06:30 np0005541603 podman[52027]: 2025-12-01 22:06:30.112310066 +0000 UTC m=+7.918799203 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  1 17:06:30 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:06:30 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:06:30 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:06:30 np0005541603 systemd[1]: session-10.scope: Deactivated successfully.
Dec  1 17:06:30 np0005541603 systemd[1]: session-10.scope: Consumed 2min 48.788s CPU time.
Dec  1 17:06:30 np0005541603 systemd-logind[788]: Session 10 logged out. Waiting for processes to exit.
Dec  1 17:06:30 np0005541603 systemd-logind[788]: Removed session 10.
Dec  1 17:06:36 np0005541603 systemd-logind[788]: New session 11 of user zuul.
Dec  1 17:06:36 np0005541603 systemd[1]: Started Session 11 of User zuul.
Dec  1 17:06:38 np0005541603 python3.9[52430]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:06:39 np0005541603 python3.9[52586]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  1 17:06:40 np0005541603 python3.9[52739]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 17:06:41 np0005541603 python3.9[52897]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  1 17:06:42 np0005541603 python3.9[53057]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 17:06:44 np0005541603 python3.9[53141]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 17:06:46 np0005541603 python3.9[53303]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 17:06:59 np0005541603 kernel: SELinux:  Converting 2731 SID table entries...
Dec  1 17:06:59 np0005541603 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 17:06:59 np0005541603 kernel: SELinux:  policy capability open_perms=1
Dec  1 17:06:59 np0005541603 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 17:06:59 np0005541603 kernel: SELinux:  policy capability always_check_network=0
Dec  1 17:06:59 np0005541603 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 17:06:59 np0005541603 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 17:06:59 np0005541603 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 17:07:00 np0005541603 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  1 17:07:00 np0005541603 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  1 17:07:01 np0005541603 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 17:07:01 np0005541603 systemd[1]: Starting man-db-cache-update.service...
Dec  1 17:07:01 np0005541603 systemd[1]: Reloading.
Dec  1 17:07:01 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:07:01 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:07:02 np0005541603 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 17:07:02 np0005541603 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 17:07:02 np0005541603 systemd[1]: Finished man-db-cache-update.service.
Dec  1 17:07:02 np0005541603 systemd[1]: run-r83f13eb6bb19406a92b1036190b47445.service: Deactivated successfully.
Dec  1 17:07:03 np0005541603 python3.9[54403]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 17:07:05 np0005541603 systemd[1]: Reloading.
Dec  1 17:07:05 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:07:05 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:07:05 np0005541603 systemd[1]: Starting Open vSwitch Database Unit...
Dec  1 17:07:05 np0005541603 chown[54445]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  1 17:07:05 np0005541603 ovs-ctl[54450]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  1 17:07:05 np0005541603 ovs-ctl[54450]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  1 17:07:05 np0005541603 ovs-ctl[54450]: Starting ovsdb-server [  OK  ]
Dec  1 17:07:05 np0005541603 ovs-vsctl[54499]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  1 17:07:05 np0005541603 ovs-vsctl[54515]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  1 17:07:05 np0005541603 ovs-ctl[54450]: Configuring Open vSwitch system IDs [  OK  ]
Dec  1 17:07:05 np0005541603 ovs-ctl[54450]: Enabling remote OVSDB managers [  OK  ]
Dec  1 17:07:05 np0005541603 ovs-vsctl[54524]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  1 17:07:05 np0005541603 systemd[1]: Started Open vSwitch Database Unit.
Dec  1 17:07:05 np0005541603 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  1 17:07:05 np0005541603 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  1 17:07:05 np0005541603 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  1 17:07:05 np0005541603 kernel: openvswitch: Open vSwitch switching datapath
Dec  1 17:07:05 np0005541603 ovs-ctl[54569]: Inserting openvswitch module [  OK  ]
Dec  1 17:07:06 np0005541603 ovs-ctl[54538]: Starting ovs-vswitchd [  OK  ]
Dec  1 17:07:06 np0005541603 ovs-vsctl[54586]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  1 17:07:06 np0005541603 ovs-ctl[54538]: Enabling remote OVSDB managers [  OK  ]
Dec  1 17:07:06 np0005541603 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  1 17:07:06 np0005541603 systemd[1]: Starting Open vSwitch...
Dec  1 17:07:06 np0005541603 systemd[1]: Finished Open vSwitch.
Dec  1 17:07:07 np0005541603 python3.9[54738]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:07:08 np0005541603 python3.9[54890]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  1 17:07:09 np0005541603 kernel: SELinux:  Converting 2745 SID table entries...
Dec  1 17:07:09 np0005541603 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 17:07:09 np0005541603 kernel: SELinux:  policy capability open_perms=1
Dec  1 17:07:09 np0005541603 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 17:07:09 np0005541603 kernel: SELinux:  policy capability always_check_network=0
Dec  1 17:07:09 np0005541603 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 17:07:09 np0005541603 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 17:07:09 np0005541603 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 17:07:10 np0005541603 python3.9[55046]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:07:11 np0005541603 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  1 17:07:11 np0005541603 python3.9[55204]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 17:07:14 np0005541603 python3.9[55357]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:07:16 np0005541603 python3.9[55644]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  1 17:07:17 np0005541603 python3.9[55794]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:07:17 np0005541603 python3.9[55948]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 17:07:19 np0005541603 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 17:07:19 np0005541603 systemd[1]: Starting man-db-cache-update.service...
Dec  1 17:07:19 np0005541603 systemd[1]: Reloading.
Dec  1 17:07:19 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:07:19 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:07:19 np0005541603 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 17:07:20 np0005541603 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 17:07:20 np0005541603 systemd[1]: Finished man-db-cache-update.service.
Dec  1 17:07:20 np0005541603 systemd[1]: run-rd464ad68b080413ab695bbc6f3b3a04f.service: Deactivated successfully.
Dec  1 17:07:21 np0005541603 python3.9[56265]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:07:21 np0005541603 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  1 17:07:21 np0005541603 systemd[1]: Stopped Network Manager Wait Online.
Dec  1 17:07:21 np0005541603 systemd[1]: Stopping Network Manager Wait Online...
Dec  1 17:07:21 np0005541603 systemd[1]: Stopping Network Manager...
Dec  1 17:07:21 np0005541603 NetworkManager[7186]: <info>  [1764626841.4163] caught SIGTERM, shutting down normally.
Dec  1 17:07:21 np0005541603 NetworkManager[7186]: <info>  [1764626841.4183] dhcp4 (eth0): canceled DHCP transaction
Dec  1 17:07:21 np0005541603 NetworkManager[7186]: <info>  [1764626841.4184] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 17:07:21 np0005541603 NetworkManager[7186]: <info>  [1764626841.4185] dhcp4 (eth0): state changed no lease
Dec  1 17:07:21 np0005541603 NetworkManager[7186]: <info>  [1764626841.4187] manager: NetworkManager state is now CONNECTED_SITE
Dec  1 17:07:21 np0005541603 NetworkManager[7186]: <info>  [1764626841.4302] exiting (success)
Dec  1 17:07:21 np0005541603 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 17:07:21 np0005541603 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 17:07:21 np0005541603 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  1 17:07:21 np0005541603 systemd[1]: Stopped Network Manager.
Dec  1 17:07:21 np0005541603 systemd[1]: NetworkManager.service: Consumed 17.141s CPU time, 4.1M memory peak, read 0B from disk, written 25.5K to disk.
Dec  1 17:07:21 np0005541603 systemd[1]: Starting Network Manager...
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.5284] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:7a82d3c7-3900-45d2-a5fc-f942d952501d)
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.5285] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.5358] manager[0x558130d33090]: monitoring kernel firmware directory '/lib/firmware'.
Dec  1 17:07:21 np0005541603 systemd[1]: Starting Hostname Service...
Dec  1 17:07:21 np0005541603 systemd[1]: Started Hostname Service.
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6493] hostname: hostname: using hostnamed
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6494] hostname: static hostname changed from (none) to "compute-0"
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6503] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6513] manager[0x558130d33090]: rfkill: Wi-Fi hardware radio set enabled
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6514] manager[0x558130d33090]: rfkill: WWAN hardware radio set enabled
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6555] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6571] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6572] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6573] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6574] manager: Networking is enabled by state file
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6578] settings: Loaded settings plugin: keyfile (internal)
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6585] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6634] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6650] dhcp: init: Using DHCP client 'internal'
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6656] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6665] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6674] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6688] device (lo): Activation: starting connection 'lo' (6817b782-5092-4502-b86b-5365c44c46c2)
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6699] device (eth0): carrier: link connected
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6707] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6716] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6717] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6727] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6737] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6747] device (eth1): carrier: link connected
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6753] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6763] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (a06cac8d-0534-52c1-8613-26ec75623b46) (indicated)
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6764] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6773] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6784] device (eth1): Activation: starting connection 'ci-private-network' (a06cac8d-0534-52c1-8613-26ec75623b46)
Dec  1 17:07:21 np0005541603 systemd[1]: Started Network Manager.
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6796] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6816] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6829] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6835] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6842] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6849] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6853] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6857] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6861] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6884] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6891] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6906] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6937] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6954] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6958] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6961] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6967] device (lo): Activation: successful, device activated.
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.6978] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  1 17:07:21 np0005541603 systemd[1]: Starting Network Manager Wait Online...
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.7058] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.7063] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.7064] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.7067] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.7071] device (eth1): Activation: successful, device activated.
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.7086] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.7088] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.7091] manager: NetworkManager state is now CONNECTED_SITE
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.7094] device (eth0): Activation: successful, device activated.
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.7099] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  1 17:07:21 np0005541603 NetworkManager[56278]: <info>  [1764626841.7102] manager: startup complete
Dec  1 17:07:21 np0005541603 systemd[1]: Finished Network Manager Wait Online.
Dec  1 17:07:22 np0005541603 python3.9[56492]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 17:07:27 np0005541603 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 17:07:27 np0005541603 systemd[1]: Starting man-db-cache-update.service...
Dec  1 17:07:27 np0005541603 systemd[1]: Reloading.
Dec  1 17:07:27 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:07:27 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:07:28 np0005541603 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 17:07:29 np0005541603 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 17:07:29 np0005541603 systemd[1]: Finished man-db-cache-update.service.
Dec  1 17:07:29 np0005541603 systemd[1]: run-rc58dcfb0179c4adf9d6eec020b453846.service: Deactivated successfully.
Dec  1 17:07:30 np0005541603 python3.9[56954]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:07:31 np0005541603 python3.9[57106]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:07:31 np0005541603 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 17:07:32 np0005541603 python3.9[57260]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:07:33 np0005541603 python3.9[57412]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:07:34 np0005541603 python3.9[57564]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:07:34 np0005541603 python3.9[57716]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:07:35 np0005541603 python3.9[57868]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:07:36 np0005541603 python3.9[57991]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764626855.1712837-229-214015623360439/.source _original_basename=.ir8f5tl9 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:07:37 np0005541603 python3.9[58143]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:07:38 np0005541603 python3.9[58295]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  1 17:07:39 np0005541603 python3.9[58447]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:07:42 np0005541603 python3.9[58874]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  1 17:07:43 np0005541603 ansible-async_wrapper.py[59049]: Invoked with j94689244919 300 /home/zuul/.ansible/tmp/ansible-tmp-1764626862.5060012-295-177962475642908/AnsiballZ_edpm_os_net_config.py _
Dec  1 17:07:43 np0005541603 ansible-async_wrapper.py[59052]: Starting module and watcher
Dec  1 17:07:43 np0005541603 ansible-async_wrapper.py[59052]: Start watching 59053 (300)
Dec  1 17:07:43 np0005541603 ansible-async_wrapper.py[59053]: Start module (59053)
Dec  1 17:07:43 np0005541603 ansible-async_wrapper.py[59049]: Return async_wrapper task started.
Dec  1 17:07:43 np0005541603 python3.9[59054]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec  1 17:07:44 np0005541603 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  1 17:07:44 np0005541603 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  1 17:07:44 np0005541603 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  1 17:07:44 np0005541603 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  1 17:07:44 np0005541603 kernel: cfg80211: failed to load regulatory.db
Dec  1 17:07:45 np0005541603 NetworkManager[56278]: <info>  [1764626865.9234] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59055 uid=0 result="success"
Dec  1 17:07:45 np0005541603 NetworkManager[56278]: <info>  [1764626865.9271] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0225] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0229] audit: op="connection-add" uuid="a8bcac38-77d2-49fb-9ff3-5b47df550edb" name="br-ex-br" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0255] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0258] audit: op="connection-add" uuid="6d114aa9-ce9d-4a2b-bd67-b2bf6a555b1c" name="br-ex-port" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0279] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0283] audit: op="connection-add" uuid="89656e89-172e-40b6-8895-1641c2f58ab7" name="eth1-port" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0304] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0307] audit: op="connection-add" uuid="23e9420a-37b1-4c25-80fd-f817dbf39615" name="vlan20-port" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0329] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0332] audit: op="connection-add" uuid="6e2e7f05-9ecd-4755-8bbe-3b71d7d93a8e" name="vlan21-port" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0355] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0358] audit: op="connection-add" uuid="a09d5a09-27f2-4403-a60d-34c9a44db1b3" name="vlan22-port" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0395] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0425] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0428] audit: op="connection-add" uuid="d38a8a87-1c4d-4b14-90c5-c43dc66b8d85" name="br-ex-if" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0493] audit: op="connection-update" uuid="a06cac8d-0534-52c1-8613-26ec75623b46" name="ci-private-network" args="connection.slave-type,connection.timestamp,connection.controller,connection.master,connection.port-type,ipv4.addresses,ipv4.dns,ipv4.method,ipv4.never-default,ipv4.routes,ipv4.routing-rules,ipv6.addresses,ipv6.dns,ipv6.method,ipv6.routes,ipv6.addr-gen-mode,ipv6.routing-rules,ovs-interface.type,ovs-external-ids.data" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0524] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0527] audit: op="connection-add" uuid="0723a6bf-9c7a-46be-9229-45bac9601899" name="vlan20-if" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0557] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0561] audit: op="connection-add" uuid="351ed789-6090-4cd5-bedb-9c8528e07f80" name="vlan21-if" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0591] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0595] audit: op="connection-add" uuid="b2945e83-28fc-4f28-82f9-731eaa7fafca" name="vlan22-if" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0615] audit: op="connection-delete" uuid="5c4fb02b-17b8-3fe0-8f9e-dc676dff4023" name="Wired connection 1" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0638] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0659] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0668] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (a8bcac38-77d2-49fb-9ff3-5b47df550edb)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0671] audit: op="connection-activate" uuid="a8bcac38-77d2-49fb-9ff3-5b47df550edb" name="br-ex-br" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0676] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0691] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0701] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (6d114aa9-ce9d-4a2b-bd67-b2bf6a555b1c)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0706] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0719] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0729] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (89656e89-172e-40b6-8895-1641c2f58ab7)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0734] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0748] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0758] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (23e9420a-37b1-4c25-80fd-f817dbf39615)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0763] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0777] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0787] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (6e2e7f05-9ecd-4755-8bbe-3b71d7d93a8e)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0792] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0807] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0817] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (a09d5a09-27f2-4403-a60d-34c9a44db1b3)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0820] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0824] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0827] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0836] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0843] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0848] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (d38a8a87-1c4d-4b14-90c5-c43dc66b8d85)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0850] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0856] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0859] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0861] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0864] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0878] device (eth1): disconnecting for new activation request.
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0880] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0885] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0888] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0890] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0895] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0902] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0908] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (0723a6bf-9c7a-46be-9229-45bac9601899)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0910] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0914] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0917] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0920] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0924] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0931] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0937] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (351ed789-6090-4cd5-bedb-9c8528e07f80)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0939] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0944] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0947] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0950] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0954] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0961] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0967] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (b2945e83-28fc-4f28-82f9-731eaa7fafca)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0969] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0974] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0977] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0979] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0983] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.0998] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1001] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1006] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1009] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1018] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1024] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 kernel: ovs-system: entered promiscuous mode
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1038] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1045] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1050] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1060] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1068] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1073] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1077] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 kernel: Timeout policy base is empty
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1087] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1094] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 systemd-udevd[59061]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1100] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1106] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1115] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1124] dhcp4 (eth0): canceled DHCP transaction
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1124] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1124] dhcp4 (eth0): state changed no lease
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1127] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1147] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1153] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59055 uid=0 result="fail" reason="Device is not activated"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1162] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  1 17:07:46 np0005541603 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1204] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1210] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1222] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  1 17:07:46 np0005541603 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1296] device (eth1): disconnecting for new activation request.
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1301] audit: op="connection-activate" uuid="a06cac8d-0534-52c1-8613-26ec75623b46" name="ci-private-network" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1336] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59055 uid=0 result="success"
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1337] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1464] device (eth1): Activation: starting connection 'ci-private-network' (a06cac8d-0534-52c1-8613-26ec75623b46)
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1470] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1481] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1485] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 kernel: br-ex: entered promiscuous mode
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1492] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1496] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1502] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1503] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1504] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1505] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1505] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1513] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1519] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1522] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1525] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1529] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1533] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1538] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1541] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1544] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1547] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1550] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1555] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1560] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1606] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1607] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 kernel: vlan22: entered promiscuous mode
Dec  1 17:07:46 np0005541603 systemd-udevd[59060]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1637] device (eth1): Activation: successful, device activated.
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1671] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1686] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1704] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1705] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1710] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  1 17:07:46 np0005541603 kernel: vlan20: entered promiscuous mode
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1837] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  1 17:07:46 np0005541603 kernel: vlan21: entered promiscuous mode
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1875] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1890] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1908] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1956] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1958] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1965] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1973] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1976] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.1983] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.2113] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.2127] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.2150] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.2153] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 17:07:46 np0005541603 NetworkManager[56278]: <info>  [1764626866.2161] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  1 17:07:47 np0005541603 NetworkManager[56278]: <info>  [1764626867.3612] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59055 uid=0 result="success"
Dec  1 17:07:47 np0005541603 python3.9[59387]: ansible-ansible.legacy.async_status Invoked with jid=j94689244919.59049 mode=status _async_dir=/root/.ansible_async
Dec  1 17:07:47 np0005541603 NetworkManager[56278]: <info>  [1764626867.6295] checkpoint[0x558130d08950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  1 17:07:47 np0005541603 NetworkManager[56278]: <info>  [1764626867.6309] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59055 uid=0 result="success"
Dec  1 17:07:47 np0005541603 NetworkManager[56278]: <info>  [1764626867.9965] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59055 uid=0 result="success"
Dec  1 17:07:47 np0005541603 NetworkManager[56278]: <info>  [1764626867.9982] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59055 uid=0 result="success"
Dec  1 17:07:48 np0005541603 NetworkManager[56278]: <info>  [1764626868.2723] audit: op="networking-control" arg="global-dns-configuration" pid=59055 uid=0 result="success"
Dec  1 17:07:48 np0005541603 NetworkManager[56278]: <info>  [1764626868.2818] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  1 17:07:48 np0005541603 NetworkManager[56278]: <info>  [1764626868.2859] audit: op="networking-control" arg="global-dns-configuration" pid=59055 uid=0 result="success"
Dec  1 17:07:48 np0005541603 NetworkManager[56278]: <info>  [1764626868.2892] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59055 uid=0 result="success"
Dec  1 17:07:48 np0005541603 NetworkManager[56278]: <info>  [1764626868.4218] checkpoint[0x558130d08a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  1 17:07:48 np0005541603 NetworkManager[56278]: <info>  [1764626868.4223] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59055 uid=0 result="success"
Dec  1 17:07:48 np0005541603 ansible-async_wrapper.py[59053]: Module complete (59053)
Dec  1 17:07:48 np0005541603 ansible-async_wrapper.py[59052]: 59053 still running (300)
Dec  1 17:07:51 np0005541603 python3.9[59494]: ansible-ansible.legacy.async_status Invoked with jid=j94689244919.59049 mode=status _async_dir=/root/.ansible_async
Dec  1 17:07:51 np0005541603 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 17:07:51 np0005541603 python3.9[59595]: ansible-ansible.legacy.async_status Invoked with jid=j94689244919.59049 mode=cleanup _async_dir=/root/.ansible_async
Dec  1 17:07:52 np0005541603 python3.9[59747]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:07:53 np0005541603 ansible-async_wrapper.py[59052]: Done in kid B.
Dec  1 17:07:53 np0005541603 python3.9[59870]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764626872.2358966-322-35368684514998/.source.returncode _original_basename=.r4ebkrhb follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:07:54 np0005541603 python3.9[60023]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:07:55 np0005541603 python3.9[60146]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764626873.8367627-338-70202249628015/.source.cfg _original_basename=.ny7zdwr4 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:07:56 np0005541603 python3.9[60298]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:07:56 np0005541603 systemd[1]: Reloading Network Manager...
Dec  1 17:07:56 np0005541603 NetworkManager[56278]: <info>  [1764626876.4911] audit: op="reload" arg="0" pid=60302 uid=0 result="success"
Dec  1 17:07:56 np0005541603 NetworkManager[56278]: <info>  [1764626876.4921] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  1 17:07:56 np0005541603 systemd[1]: Reloaded Network Manager.
Dec  1 17:07:56 np0005541603 systemd-logind[788]: Session 11 logged out. Waiting for processes to exit.
Dec  1 17:07:56 np0005541603 systemd[1]: session-11.scope: Deactivated successfully.
Dec  1 17:07:56 np0005541603 systemd[1]: session-11.scope: Consumed 57.254s CPU time.
Dec  1 17:07:56 np0005541603 systemd-logind[788]: Removed session 11.
Dec  1 17:08:02 np0005541603 systemd-logind[788]: New session 12 of user zuul.
Dec  1 17:08:02 np0005541603 systemd[1]: Started Session 12 of User zuul.
Dec  1 17:08:03 np0005541603 python3.9[60488]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:08:05 np0005541603 python3.9[60643]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 17:08:06 np0005541603 python3.9[60832]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:08:06 np0005541603 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 17:08:06 np0005541603 systemd[1]: session-12.scope: Deactivated successfully.
Dec  1 17:08:06 np0005541603 systemd[1]: session-12.scope: Consumed 3.017s CPU time.
Dec  1 17:08:06 np0005541603 systemd-logind[788]: Session 12 logged out. Waiting for processes to exit.
Dec  1 17:08:06 np0005541603 systemd-logind[788]: Removed session 12.
Dec  1 17:08:12 np0005541603 systemd-logind[788]: New session 13 of user zuul.
Dec  1 17:08:12 np0005541603 systemd[1]: Started Session 13 of User zuul.
Dec  1 17:08:13 np0005541603 python3.9[61015]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:08:14 np0005541603 python3.9[61169]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:08:16 np0005541603 python3.9[61325]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 17:08:17 np0005541603 python3.9[61412]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 17:08:19 np0005541603 python3.9[61565]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 17:08:20 np0005541603 python3.9[61758]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:08:21 np0005541603 python3.9[61910]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:08:21 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:08:22 np0005541603 python3.9[62074]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:08:23 np0005541603 python3.9[62152]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:08:24 np0005541603 python3.9[62304]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:08:24 np0005541603 python3.9[62382]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:08:25 np0005541603 python3.9[62534]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:08:26 np0005541603 python3.9[62686]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:08:27 np0005541603 python3.9[62838]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:08:27 np0005541603 python3.9[62992]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:08:28 np0005541603 python3.9[63144]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 17:08:31 np0005541603 python3.9[63297]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:08:32 np0005541603 python3.9[63451]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:08:33 np0005541603 python3.9[63603]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:08:34 np0005541603 python3.9[63755]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:08:35 np0005541603 python3.9[63908]: ansible-service_facts Invoked
Dec  1 17:08:35 np0005541603 network[63925]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 17:08:35 np0005541603 network[63926]: 'network-scripts' will be removed from distribution in near future.
Dec  1 17:08:35 np0005541603 network[63927]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 17:08:41 np0005541603 python3.9[64381]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 17:08:44 np0005541603 python3.9[64534]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  1 17:08:45 np0005541603 python3.9[64686]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:08:46 np0005541603 python3.9[64811]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764626924.6453125-232-244102556013716/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:08:47 np0005541603 python3.9[64965]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:08:47 np0005541603 irqbalance[782]: Cannot change IRQ 26 affinity: Operation not permitted
Dec  1 17:08:47 np0005541603 irqbalance[782]: IRQ 26 affinity is now unmanaged
Dec  1 17:08:47 np0005541603 python3.9[65090]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764626926.6377857-247-78113457795152/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:08:49 np0005541603 python3.9[65244]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:08:50 np0005541603 python3.9[65398]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 17:08:52 np0005541603 python3.9[65482]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:08:53 np0005541603 python3.9[65636]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 17:08:54 np0005541603 python3.9[65720]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:08:54 np0005541603 chronyd[795]: chronyd exiting
Dec  1 17:08:54 np0005541603 systemd[1]: Stopping NTP client/server...
Dec  1 17:08:54 np0005541603 systemd[1]: chronyd.service: Deactivated successfully.
Dec  1 17:08:54 np0005541603 systemd[1]: Stopped NTP client/server.
Dec  1 17:08:54 np0005541603 systemd[1]: Starting NTP client/server...
Dec  1 17:08:54 np0005541603 chronyd[65729]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  1 17:08:54 np0005541603 chronyd[65729]: Frequency -28.339 +/- 0.246 ppm read from /var/lib/chrony/drift
Dec  1 17:08:54 np0005541603 chronyd[65729]: Loaded seccomp filter (level 2)
Dec  1 17:08:54 np0005541603 systemd[1]: Started NTP client/server.
Dec  1 17:08:54 np0005541603 systemd[1]: session-13.scope: Deactivated successfully.
Dec  1 17:08:54 np0005541603 systemd[1]: session-13.scope: Consumed 30.381s CPU time.
Dec  1 17:08:54 np0005541603 systemd-logind[788]: Session 13 logged out. Waiting for processes to exit.
Dec  1 17:08:54 np0005541603 systemd-logind[788]: Removed session 13.
Dec  1 17:09:00 np0005541603 systemd-logind[788]: New session 14 of user zuul.
Dec  1 17:09:00 np0005541603 systemd[1]: Started Session 14 of User zuul.
Dec  1 17:09:02 np0005541603 python3.9[65910]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:09:03 np0005541603 python3.9[66066]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:04 np0005541603 python3.9[66241]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:04 np0005541603 python3.9[66319]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.u3x2bnim recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:05 np0005541603 python3.9[66471]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:06 np0005541603 python3.9[66594]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764626945.3134375-61-25598197868686/.source _original_basename=.r3d7v3ty follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:07 np0005541603 python3.9[66746]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:09:08 np0005541603 python3.9[66898]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:08 np0005541603 python3.9[67021]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764626947.652759-85-245765181304655/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:09:09 np0005541603 python3.9[67173]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:10 np0005541603 python3.9[67296]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764626949.0835128-85-206489800140656/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:09:11 np0005541603 python3.9[67448]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:12 np0005541603 python3.9[67600]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:12 np0005541603 python3.9[67723]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764626951.3930888-122-167211788524439/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:13 np0005541603 python3.9[67875]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:14 np0005541603 python3.9[67998]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764626952.891489-137-201783741332207/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:15 np0005541603 python3.9[68150]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:09:15 np0005541603 systemd[1]: Reloading.
Dec  1 17:09:15 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:09:15 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:09:15 np0005541603 systemd[1]: Reloading.
Dec  1 17:09:15 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:09:15 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:09:15 np0005541603 systemd[1]: Starting EDPM Container Shutdown...
Dec  1 17:09:15 np0005541603 systemd[1]: Finished EDPM Container Shutdown.
Dec  1 17:09:16 np0005541603 python3.9[68375]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:17 np0005541603 python3.9[68498]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764626956.1687524-160-92856152509514/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:18 np0005541603 python3.9[68650]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:19 np0005541603 python3.9[68773]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764626957.8005748-175-29543526559668/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:20 np0005541603 python3.9[68925]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:09:20 np0005541603 systemd[1]: Reloading.
Dec  1 17:09:20 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:09:20 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:09:20 np0005541603 systemd[1]: Reloading.
Dec  1 17:09:20 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:09:20 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:09:20 np0005541603 systemd[1]: Starting Create netns directory...
Dec  1 17:09:20 np0005541603 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 17:09:20 np0005541603 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 17:09:20 np0005541603 systemd[1]: Finished Create netns directory.
Dec  1 17:09:21 np0005541603 python3.9[69151]: ansible-ansible.builtin.service_facts Invoked
Dec  1 17:09:21 np0005541603 network[69168]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 17:09:21 np0005541603 network[69169]: 'network-scripts' will be removed from distribution in near future.
Dec  1 17:09:21 np0005541603 network[69170]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 17:09:27 np0005541603 python3.9[69432]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:09:27 np0005541603 systemd[1]: Reloading.
Dec  1 17:09:27 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:09:27 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:09:27 np0005541603 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  1 17:09:27 np0005541603 iptables.init[69473]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  1 17:09:28 np0005541603 iptables.init[69473]: iptables: Flushing firewall rules: [  OK  ]
Dec  1 17:09:28 np0005541603 systemd[1]: iptables.service: Deactivated successfully.
Dec  1 17:09:28 np0005541603 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  1 17:09:28 np0005541603 python3.9[69670]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:09:29 np0005541603 python3.9[69824]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:09:29 np0005541603 systemd[1]: Reloading.
Dec  1 17:09:30 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:09:30 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:09:30 np0005541603 systemd[1]: Starting Netfilter Tables...
Dec  1 17:09:30 np0005541603 systemd[1]: Finished Netfilter Tables.
Dec  1 17:09:31 np0005541603 python3.9[70016]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:09:32 np0005541603 python3.9[70169]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:33 np0005541603 python3.9[70294]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764626972.3118482-244-152685437813229/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:34 np0005541603 python3.9[70447]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:09:34 np0005541603 systemd[1]: Reloading OpenSSH server daemon...
Dec  1 17:09:34 np0005541603 systemd[1]: Reloaded OpenSSH server daemon.
Dec  1 17:09:35 np0005541603 python3.9[70603]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:36 np0005541603 python3.9[70755]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:36 np0005541603 python3.9[70878]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764626975.4958098-275-140753550355213/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:37 np0005541603 python3.9[71030]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  1 17:09:37 np0005541603 systemd[1]: Starting Time & Date Service...
Dec  1 17:09:37 np0005541603 systemd[1]: Started Time & Date Service.
Dec  1 17:09:38 np0005541603 python3.9[71186]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:39 np0005541603 python3.9[71338]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:40 np0005541603 python3.9[71461]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764626978.849747-310-114363261397875/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:40 np0005541603 python3.9[71613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:41 np0005541603 python3.9[71736]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764626980.3590055-325-9886679495412/.source.yaml _original_basename=.k108ulfa follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:42 np0005541603 python3.9[71890]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:43 np0005541603 python3.9[72013]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764626981.7677555-340-275579108092674/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:43 np0005541603 python3.9[72165]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:09:44 np0005541603 python3.9[72318]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:09:45 np0005541603 python3[72471]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 17:09:46 np0005541603 python3.9[72625]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:47 np0005541603 python3.9[72748]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764626985.9475822-379-89482028130857/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:48 np0005541603 python3.9[72900]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:48 np0005541603 python3.9[73023]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764626987.4461029-394-143343804505245/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:49 np0005541603 python3.9[73175]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:50 np0005541603 python3.9[73298]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764626989.0707703-409-56422706256209/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:51 np0005541603 python3.9[73450]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:51 np0005541603 python3.9[73573]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764626990.6503806-424-100407224273148/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:52 np0005541603 python3.9[73727]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:09:53 np0005541603 python3.9[73850]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764626992.0737705-439-198761941909900/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:54 np0005541603 python3.9[74002]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:54 np0005541603 python3.9[74154]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:09:56 np0005541603 python3.9[74313]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:57 np0005541603 python3.9[74466]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:57 np0005541603 python3.9[74618]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:09:58 np0005541603 python3.9[74770]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  1 17:09:58 np0005541603 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 17:09:58 np0005541603 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 17:09:59 np0005541603 python3.9[74924]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  1 17:10:00 np0005541603 systemd[1]: session-14.scope: Deactivated successfully.
Dec  1 17:10:00 np0005541603 systemd[1]: session-14.scope: Consumed 42.825s CPU time.
Dec  1 17:10:00 np0005541603 systemd-logind[788]: Session 14 logged out. Waiting for processes to exit.
Dec  1 17:10:00 np0005541603 systemd-logind[788]: Removed session 14.
Dec  1 17:10:05 np0005541603 systemd-logind[788]: New session 15 of user zuul.
Dec  1 17:10:05 np0005541603 systemd[1]: Started Session 15 of User zuul.
Dec  1 17:10:06 np0005541603 python3.9[75107]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  1 17:10:07 np0005541603 python3.9[75259]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:10:08 np0005541603 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  1 17:10:08 np0005541603 python3.9[75413]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:10:09 np0005541603 python3.9[75565]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDL2AWoEtNuTJrLW6YRgTR6nsfYb8TPcN5NHmyd/6E69AIu9KZJxXsbg4lWswJVHJRsaE5rK+dGP+vRkmFsIgHcrUabiG/e8uWXc5PLgTr8ro3K9VuTZDj/53vlohPBjCnWS98QVre+LZZmYVuIbjBpGf5/+zjhNohnzB8A/P/olhGFruf4+MgQ5S5XxJUmlrtN0lM8nt6qY0vZ1ZA6n2C9CFYaIZVaFW6cYNgTRFb4x1lUPwgJnklIl8UOHNxGdE3yJeA3g35wSgp06y2WAuhr/rzLsV/5tJd9OUBBEI6Bv5BlwLX6PilQpla4COuCWX9sJEt7xQUg41AypQ7FmfGry+gakZzbhmU5/LT4V05j/p0orYAa8sa2/wpBAp/5F7wY5yf/wx+/gJ+3bEBZT7w0ldpboWP8XfCRKuCS9mpXaHaBqZgqBpkDGdGo7PXaU0iIX2Dc4c3ShHfzLptGCsyYn4Md+vg/Ssty3wpSmFn3LTXBvUugrtK1QNK4G+6XwHE=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICw2Lkt2wYeqw5Es0dd/f2RAMjaDrXARP4jdSy6emOxo#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJWzBLpQeEtrr/Wn83M6XPhP+9/Mq88DKQulYZOFvIsIpNA/UvSW05Uknj+r8Ed96VzQ5mRytqshigqSXWYotlA=#012 create=True mode=0644 path=/tmp/ansible.jddhr8be state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:10:10 np0005541603 python3.9[75717]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.jddhr8be' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:10:11 np0005541603 python3.9[75871]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.jddhr8be state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:10:11 np0005541603 systemd[1]: session-15.scope: Deactivated successfully.
Dec  1 17:10:11 np0005541603 systemd[1]: session-15.scope: Consumed 4.058s CPU time.
Dec  1 17:10:11 np0005541603 systemd-logind[788]: Session 15 logged out. Waiting for processes to exit.
Dec  1 17:10:11 np0005541603 systemd-logind[788]: Removed session 15.
Dec  1 17:10:17 np0005541603 systemd-logind[788]: New session 16 of user zuul.
Dec  1 17:10:17 np0005541603 systemd[1]: Started Session 16 of User zuul.
Dec  1 17:10:18 np0005541603 python3.9[76049]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:10:20 np0005541603 python3.9[76205]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  1 17:10:21 np0005541603 python3.9[76359]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:10:22 np0005541603 python3.9[76512]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:10:23 np0005541603 python3.9[76665]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:10:24 np0005541603 python3.9[76819]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:10:25 np0005541603 python3.9[76974]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:10:25 np0005541603 systemd[1]: session-16.scope: Deactivated successfully.
Dec  1 17:10:25 np0005541603 systemd[1]: session-16.scope: Consumed 5.413s CPU time.
Dec  1 17:10:25 np0005541603 systemd-logind[788]: Session 16 logged out. Waiting for processes to exit.
Dec  1 17:10:25 np0005541603 systemd-logind[788]: Removed session 16.
Dec  1 17:10:31 np0005541603 systemd-logind[788]: New session 17 of user zuul.
Dec  1 17:10:31 np0005541603 systemd[1]: Started Session 17 of User zuul.
Dec  1 17:10:33 np0005541603 python3.9[77152]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:10:34 np0005541603 python3.9[77308]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 17:10:35 np0005541603 python3.9[77392]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 17:10:37 np0005541603 python3.9[77543]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:10:39 np0005541603 python3.9[77694]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 17:10:39 np0005541603 python3.9[77844]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:10:40 np0005541603 python3.9[77994]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:10:41 np0005541603 systemd[1]: session-17.scope: Deactivated successfully.
Dec  1 17:10:41 np0005541603 systemd[1]: session-17.scope: Consumed 6.819s CPU time.
Dec  1 17:10:41 np0005541603 systemd-logind[788]: Session 17 logged out. Waiting for processes to exit.
Dec  1 17:10:41 np0005541603 systemd-logind[788]: Removed session 17.
Dec  1 17:10:47 np0005541603 systemd-logind[788]: New session 18 of user zuul.
Dec  1 17:10:47 np0005541603 systemd[1]: Started Session 18 of User zuul.
Dec  1 17:10:48 np0005541603 python3.9[78172]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:10:50 np0005541603 python3.9[78328]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:10:50 np0005541603 python3.9[78480]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:10:51 np0005541603 python3.9[78632]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:10:52 np0005541603 python3.9[78755]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627051.1390421-65-153745829548798/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=7aabf9824b930980f6c7f49eaeb4a97ff2e01d1d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:10:53 np0005541603 python3.9[78907]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:10:54 np0005541603 python3.9[79030]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627052.8209789-65-52205452449625/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=f2c91ad1a3e42ff28fdcf64747d0c68381b07bfa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:10:54 np0005541603 python3.9[79182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:10:55 np0005541603 python3.9[79305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627054.3701694-65-75073069200365/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a4c51f47081c6abd0a486594df88d4a894140f93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:10:56 np0005541603 python3.9[79457]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:10:57 np0005541603 python3.9[79609]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:10:58 np0005541603 python3.9[79761]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:10:58 np0005541603 python3.9[79884]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627057.4834661-124-126277060500044/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=a252478506da1ce7810ab74bb1cc274decff37f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:10:59 np0005541603 python3.9[80036]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:00 np0005541603 python3.9[80159]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627058.889476-124-52437058287003/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=f2c91ad1a3e42ff28fdcf64747d0c68381b07bfa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:00 np0005541603 python3.9[80311]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:01 np0005541603 python3.9[80434]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627060.4231033-124-192329791410288/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=eee32d3e1f5d1bfe2b1a4a0724d03866b346008a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:02 np0005541603 python3.9[80586]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:11:03 np0005541603 python3.9[80738]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:11:04 np0005541603 chronyd[65729]: Selected source 199.182.221.110 (pool.ntp.org)
Dec  1 17:11:04 np0005541603 python3.9[80890]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:04 np0005541603 python3.9[81013]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627063.4876947-183-43937798044337/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=86d7dedc780dd44615f9d0db2f336072b1bd5604 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:05 np0005541603 python3.9[81165]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:06 np0005541603 python3.9[81288]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627065.0008888-183-68332246272183/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=51e951d913fb697c2f0031cddfe628c234de4db8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:07 np0005541603 python3.9[81440]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:07 np0005541603 python3.9[81563]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627066.468089-183-97691192663484/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=241ab0843b69068c8d79bdd1a2ce6fbe409dc99c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:08 np0005541603 python3.9[81715]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:11:09 np0005541603 python3.9[81867]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:11:10 np0005541603 python3.9[82019]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:10 np0005541603 python3.9[82142]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627069.6464818-242-66807399695384/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=56a2870e123e94eff10877f58ff4854274666cdc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:11 np0005541603 python3.9[82294]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:12 np0005541603 python3.9[82417]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627071.1957858-242-8759256572732/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=4d0064bd805fd2d08b456562460bacfa0225d715 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:13 np0005541603 python3.9[82569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:14 np0005541603 python3.9[82692]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627072.6808512-242-110151629943957/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=b85a36994aae8322f446589bbd77229f19314bb3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:14 np0005541603 python3.9[82844]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:11:15 np0005541603 python3.9[82996]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:11:16 np0005541603 python3.9[83148]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:17 np0005541603 python3.9[83271]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627076.0123832-301-135485995439243/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=ce02d42d27c12ef576741c23add0fdc8e7554ab2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:18 np0005541603 python3.9[83423]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:18 np0005541603 python3.9[83546]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627077.5012527-301-113871486312869/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=51e951d913fb697c2f0031cddfe628c234de4db8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:19 np0005541603 python3.9[83698]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:20 np0005541603 python3.9[83823]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627078.9223294-301-168875264954006/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=507f801a7e25b5c0d7777ef6520fee78aaef47f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:21 np0005541603 python3.9[83975]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:11:22 np0005541603 python3.9[84127]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:23 np0005541603 python3.9[84250]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627081.9985974-369-104781955990490/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81ec6f5b857a0813598f2d4eac5c983645f334f3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:24 np0005541603 python3.9[84402]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:11:24 np0005541603 python3.9[84554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:25 np0005541603 python3.9[84677]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627084.3375664-393-20910464539049/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81ec6f5b857a0813598f2d4eac5c983645f334f3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:26 np0005541603 python3.9[84829]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:11:27 np0005541603 python3.9[84981]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:28 np0005541603 python3.9[85104]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627086.867568-417-236753406338847/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81ec6f5b857a0813598f2d4eac5c983645f334f3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:29 np0005541603 python3.9[85256]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:11:29 np0005541603 python3.9[85408]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:30 np0005541603 python3.9[85531]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627089.2484257-441-177232563704236/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81ec6f5b857a0813598f2d4eac5c983645f334f3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:31 np0005541603 python3.9[85683]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:11:33 np0005541603 python3.9[85836]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:34 np0005541603 python3.9[85959]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627092.8019505-465-214763458724355/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81ec6f5b857a0813598f2d4eac5c983645f334f3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:35 np0005541603 python3.9[86111]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:11:35 np0005541603 python3.9[86263]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:36 np0005541603 python3.9[86386]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627095.2527394-489-33426854424940/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81ec6f5b857a0813598f2d4eac5c983645f334f3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:37 np0005541603 python3.9[86538]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:11:38 np0005541603 python3.9[86690]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:38 np0005541603 python3.9[86813]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627097.6696343-513-230667999965685/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81ec6f5b857a0813598f2d4eac5c983645f334f3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:39 np0005541603 python3.9[86965]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:11:40 np0005541603 python3.9[87117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:11:41 np0005541603 python3.9[87240]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627100.0236793-537-264878617981549/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81ec6f5b857a0813598f2d4eac5c983645f334f3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:11:41 np0005541603 systemd[1]: session-18.scope: Deactivated successfully.
Dec  1 17:11:41 np0005541603 systemd[1]: session-18.scope: Consumed 43.626s CPU time.
Dec  1 17:11:41 np0005541603 systemd-logind[788]: Session 18 logged out. Waiting for processes to exit.
Dec  1 17:11:41 np0005541603 systemd-logind[788]: Removed session 18.
Dec  1 17:11:48 np0005541603 systemd-logind[788]: New session 19 of user zuul.
Dec  1 17:11:48 np0005541603 systemd[1]: Started Session 19 of User zuul.
Dec  1 17:11:49 np0005541603 python3.9[87419]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:11:50 np0005541603 python3.9[87575]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:11:51 np0005541603 python3.9[87727]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:11:52 np0005541603 python3.9[87877]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:11:53 np0005541603 python3.9[88029]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  1 17:11:55 np0005541603 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec  1 17:11:55 np0005541603 python3.9[88185]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 17:11:57 np0005541603 python3.9[88269]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 17:11:59 np0005541603 python3.9[88422]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 17:12:00 np0005541603 python3[88577]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec  1 17:12:01 np0005541603 python3.9[88729]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:02 np0005541603 python3.9[88881]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:12:03 np0005541603 python3.9[88959]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:04 np0005541603 python3.9[89111]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:12:04 np0005541603 python3.9[89191]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.3m99d3k3 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:05 np0005541603 python3.9[89343]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:12:05 np0005541603 python3.9[89421]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:06 np0005541603 python3.9[89573]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:12:07 np0005541603 python3[89726]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 17:12:08 np0005541603 python3.9[89878]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:12:09 np0005541603 python3.9[90003]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627128.106974-157-123421780839271/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:10 np0005541603 python3.9[90155]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:12:11 np0005541603 python3.9[90280]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627129.8000753-172-280035400770562/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:12 np0005541603 python3.9[90432]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:12:12 np0005541603 python3.9[90557]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627131.406846-187-271438605807852/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:13 np0005541603 python3.9[90709]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:12:14 np0005541603 python3.9[90834]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627132.9998918-202-157925573032121/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:15 np0005541603 python3.9[90986]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:12:15 np0005541603 python3.9[91113]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627134.5197024-217-10558817454599/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:16 np0005541603 python3.9[91265]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:17 np0005541603 python3.9[91417]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:12:18 np0005541603 python3.9[91572]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:19 np0005541603 python3.9[91724]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:12:20 np0005541603 python3.9[91877]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:12:21 np0005541603 python3.9[92031]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:12:22 np0005541603 python3.9[92186]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:23 np0005541603 python3.9[92336]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:12:24 np0005541603 python3.9[92491]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:12:24 np0005541603 ovs-vsctl[92492]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec  1 17:12:25 np0005541603 python3.9[92644]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:12:26 np0005541603 python3.9[92799]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:12:26 np0005541603 ovs-vsctl[92800]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec  1 17:12:27 np0005541603 python3.9[92950]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:12:28 np0005541603 python3.9[93104]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:12:29 np0005541603 python3.9[93256]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:12:29 np0005541603 python3.9[93334]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:12:30 np0005541603 python3.9[93487]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:12:30 np0005541603 python3.9[93565]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:12:31 np0005541603 python3.9[93717]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:32 np0005541603 python3.9[93869]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:12:33 np0005541603 python3.9[93947]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:33 np0005541603 python3.9[94099]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:12:34 np0005541603 python3.9[94177]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:35 np0005541603 python3.9[94329]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:12:35 np0005541603 systemd[1]: Reloading.
Dec  1 17:12:35 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:12:35 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:12:36 np0005541603 python3.9[94519]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:12:37 np0005541603 python3.9[94597]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:37 np0005541603 python3.9[94749]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:12:38 np0005541603 python3.9[94829]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:39 np0005541603 python3.9[94981]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:12:39 np0005541603 systemd[1]: Reloading.
Dec  1 17:12:39 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:12:39 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:12:39 np0005541603 systemd[1]: Starting Create netns directory...
Dec  1 17:12:39 np0005541603 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 17:12:39 np0005541603 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 17:12:39 np0005541603 systemd[1]: Finished Create netns directory.
Dec  1 17:12:40 np0005541603 python3.9[95175]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:12:41 np0005541603 python3.9[95327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:12:42 np0005541603 python3.9[95450]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627161.0258079-468-14720890377019/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:12:43 np0005541603 python3.9[95602]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:12:44 np0005541603 python3.9[95754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:12:44 np0005541603 python3.9[95877]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627163.6188893-493-126079987484124/.source.json _original_basename=.xtmbezyu follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:45 np0005541603 python3.9[96029]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:48 np0005541603 python3.9[96456]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  1 17:12:49 np0005541603 python3.9[96608]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 17:12:50 np0005541603 python3.9[96760]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  1 17:12:50 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:12:52 np0005541603 python3[96923]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 17:12:52 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:12:52 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:12:52 np0005541603 podman[96958]: 2025-12-01 22:12:52.328319953 +0000 UTC m=+0.062837051 container create 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec  1 17:12:52 np0005541603 podman[96958]: 2025-12-01 22:12:52.296832541 +0000 UTC m=+0.031349659 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  1 17:12:52 np0005541603 python3[96923]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  1 17:12:53 np0005541603 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 17:12:53 np0005541603 python3.9[97146]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:12:54 np0005541603 python3.9[97300]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:54 np0005541603 python3.9[97376]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:12:55 np0005541603 python3.9[97527]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764627174.9251847-581-19368242214327/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:12:56 np0005541603 python3.9[97603]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:12:56 np0005541603 systemd[1]: Reloading.
Dec  1 17:12:56 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:12:56 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:12:57 np0005541603 python3.9[97714]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:12:57 np0005541603 systemd[1]: Reloading.
Dec  1 17:12:57 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:12:57 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:12:57 np0005541603 systemd[1]: Starting ovn_controller container...
Dec  1 17:12:57 np0005541603 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  1 17:12:57 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:12:57 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae679bdede7c68fd3eb7bdb674b66305bf0d73545018c26f4dfb9ba1542d2c37/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  1 17:12:57 np0005541603 systemd[1]: Started /usr/bin/podman healthcheck run 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367.
Dec  1 17:12:57 np0005541603 podman[97755]: 2025-12-01 22:12:57.95539793 +0000 UTC m=+0.140569065 container init 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 17:12:57 np0005541603 ovn_controller[97770]: + sudo -E kolla_set_configs
Dec  1 17:12:57 np0005541603 podman[97755]: 2025-12-01 22:12:57.991557555 +0000 UTC m=+0.176728640 container start 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller)
Dec  1 17:12:57 np0005541603 edpm-start-podman-container[97755]: ovn_controller
Dec  1 17:12:58 np0005541603 systemd[1]: Created slice User Slice of UID 0.
Dec  1 17:12:58 np0005541603 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec  1 17:12:58 np0005541603 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec  1 17:12:58 np0005541603 systemd[1]: Starting User Manager for UID 0...
Dec  1 17:12:58 np0005541603 edpm-start-podman-container[97754]: Creating additional drop-in dependency for "ovn_controller" (6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367)
Dec  1 17:12:58 np0005541603 podman[97777]: 2025-12-01 22:12:58.087770312 +0000 UTC m=+0.078076634 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:12:58 np0005541603 systemd[1]: 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367-5efcb32b5aac2091.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 17:12:58 np0005541603 systemd[1]: 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367-5efcb32b5aac2091.service: Failed with result 'exit-code'.
Dec  1 17:12:58 np0005541603 systemd[1]: Reloading.
Dec  1 17:12:58 np0005541603 systemd[97804]: Queued start job for default target Main User Target.
Dec  1 17:12:58 np0005541603 systemd[97804]: Created slice User Application Slice.
Dec  1 17:12:58 np0005541603 systemd[97804]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec  1 17:12:58 np0005541603 systemd[97804]: Started Daily Cleanup of User's Temporary Directories.
Dec  1 17:12:58 np0005541603 systemd[97804]: Reached target Paths.
Dec  1 17:12:58 np0005541603 systemd[97804]: Reached target Timers.
Dec  1 17:12:58 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:12:58 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:12:58 np0005541603 systemd[97804]: Starting D-Bus User Message Bus Socket...
Dec  1 17:12:58 np0005541603 systemd[97804]: Starting Create User's Volatile Files and Directories...
Dec  1 17:12:58 np0005541603 systemd[97804]: Finished Create User's Volatile Files and Directories.
Dec  1 17:12:58 np0005541603 systemd[97804]: Listening on D-Bus User Message Bus Socket.
Dec  1 17:12:58 np0005541603 systemd[97804]: Reached target Sockets.
Dec  1 17:12:58 np0005541603 systemd[97804]: Reached target Basic System.
Dec  1 17:12:58 np0005541603 systemd[97804]: Reached target Main User Target.
Dec  1 17:12:58 np0005541603 systemd[97804]: Startup finished in 144ms.
Dec  1 17:12:58 np0005541603 systemd[1]: Started User Manager for UID 0.
Dec  1 17:12:58 np0005541603 systemd[1]: Started ovn_controller container.
Dec  1 17:12:58 np0005541603 systemd[1]: Started Session c1 of User root.
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: INFO:__main__:Validating config file
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: INFO:__main__:Writing out command to execute
Dec  1 17:12:58 np0005541603 systemd[1]: session-c1.scope: Deactivated successfully.
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: ++ cat /run_command
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: + ARGS=
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: + sudo kolla_copy_cacerts
Dec  1 17:12:58 np0005541603 systemd[1]: Started Session c2 of User root.
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: + [[ ! -n '' ]]
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: + . kolla_extend_start
Dec  1 17:12:58 np0005541603 systemd[1]: session-c2.scope: Deactivated successfully.
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: + umask 0022
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec  1 17:12:58 np0005541603 NetworkManager[56278]: <info>  [1764627178.5083] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  1 17:12:58 np0005541603 NetworkManager[56278]: <info>  [1764627178.5092] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 17:12:58 np0005541603 NetworkManager[56278]: <info>  [1764627178.5107] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Dec  1 17:12:58 np0005541603 NetworkManager[56278]: <info>  [1764627178.5116] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Dec  1 17:12:58 np0005541603 NetworkManager[56278]: <info>  [1764627178.5123] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  1 17:12:58 np0005541603 kernel: br-int: entered promiscuous mode
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00022|main|INFO|OVS feature set changed, force recompute.
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  1 17:12:58 np0005541603 ovn_controller[97770]: 2025-12-01T22:12:58Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  1 17:12:58 np0005541603 NetworkManager[56278]: <info>  [1764627178.5340] manager: (ovn-43a95e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec  1 17:12:58 np0005541603 NetworkManager[56278]: <info>  [1764627178.5537] device (genev_sys_6081): carrier: link connected
Dec  1 17:12:58 np0005541603 NetworkManager[56278]: <info>  [1764627178.5540] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Dec  1 17:12:58 np0005541603 kernel: genev_sys_6081: entered promiscuous mode
Dec  1 17:12:58 np0005541603 systemd-udevd[97920]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 17:12:58 np0005541603 systemd-udevd[97912]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 17:12:59 np0005541603 python3.9[98038]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:12:59 np0005541603 ovs-vsctl[98039]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  1 17:13:00 np0005541603 python3.9[98191]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:13:00 np0005541603 ovs-vsctl[98193]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  1 17:13:01 np0005541603 python3.9[98346]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:13:01 np0005541603 ovs-vsctl[98347]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec  1 17:13:01 np0005541603 systemd[1]: session-19.scope: Deactivated successfully.
Dec  1 17:13:01 np0005541603 systemd[1]: session-19.scope: Consumed 55.823s CPU time.
Dec  1 17:13:01 np0005541603 systemd-logind[788]: Session 19 logged out. Waiting for processes to exit.
Dec  1 17:13:01 np0005541603 systemd-logind[788]: Removed session 19.
Dec  1 17:13:06 np0005541603 systemd-logind[788]: New session 21 of user zuul.
Dec  1 17:13:06 np0005541603 systemd[1]: Started Session 21 of User zuul.
Dec  1 17:13:07 np0005541603 python3.9[98525]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:13:08 np0005541603 systemd[1]: Stopping User Manager for UID 0...
Dec  1 17:13:08 np0005541603 systemd[97804]: Activating special unit Exit the Session...
Dec  1 17:13:08 np0005541603 systemd[97804]: Stopped target Main User Target.
Dec  1 17:13:08 np0005541603 systemd[97804]: Stopped target Basic System.
Dec  1 17:13:08 np0005541603 systemd[97804]: Stopped target Paths.
Dec  1 17:13:08 np0005541603 systemd[97804]: Stopped target Sockets.
Dec  1 17:13:08 np0005541603 systemd[97804]: Stopped target Timers.
Dec  1 17:13:08 np0005541603 systemd[97804]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  1 17:13:08 np0005541603 systemd[97804]: Closed D-Bus User Message Bus Socket.
Dec  1 17:13:08 np0005541603 systemd[97804]: Stopped Create User's Volatile Files and Directories.
Dec  1 17:13:08 np0005541603 systemd[97804]: Removed slice User Application Slice.
Dec  1 17:13:08 np0005541603 systemd[97804]: Reached target Shutdown.
Dec  1 17:13:08 np0005541603 systemd[97804]: Finished Exit the Session.
Dec  1 17:13:08 np0005541603 systemd[97804]: Reached target Exit the Session.
Dec  1 17:13:08 np0005541603 systemd[1]: user@0.service: Deactivated successfully.
Dec  1 17:13:08 np0005541603 systemd[1]: Stopped User Manager for UID 0.
Dec  1 17:13:08 np0005541603 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec  1 17:13:08 np0005541603 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec  1 17:13:08 np0005541603 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec  1 17:13:08 np0005541603 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec  1 17:13:08 np0005541603 systemd[1]: Removed slice User Slice of UID 0.
Dec  1 17:13:09 np0005541603 python3.9[98683]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:09 np0005541603 python3.9[98835]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:10 np0005541603 python3.9[98987]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:11 np0005541603 python3.9[99139]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:12 np0005541603 python3.9[99291]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:13 np0005541603 python3.9[99441]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:13:14 np0005541603 python3.9[99593]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  1 17:13:16 np0005541603 python3.9[99743]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:13:16 np0005541603 python3.9[99865]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627195.1933522-86-27360975203406/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:17 np0005541603 python3.9[100015]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:13:18 np0005541603 python3.9[100136]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627197.1202426-101-42966662579771/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:19 np0005541603 python3.9[100288]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 17:13:20 np0005541603 python3.9[100372]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 17:13:23 np0005541603 python3.9[100525]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 17:13:23 np0005541603 python3.9[100678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:13:24 np0005541603 python3.9[100799]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627203.3955235-138-49312359051173/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:25 np0005541603 python3.9[100949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:13:26 np0005541603 python3.9[101070]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627204.8597622-138-148425693782732/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:27 np0005541603 python3.9[101220]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:13:28 np0005541603 python3.9[101341]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627206.8665454-182-270884985925309/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:28 np0005541603 ovn_controller[97770]: 2025-12-01T22:13:28Z|00025|memory|INFO|16000 kB peak resident set size after 29.8 seconds
Dec  1 17:13:28 np0005541603 ovn_controller[97770]: 2025-12-01T22:13:28Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec  1 17:13:28 np0005541603 podman[101342]: 2025-12-01 22:13:28.30651572 +0000 UTC m=+0.139386754 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 17:13:28 np0005541603 python3.9[101518]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:13:29 np0005541603 python3.9[101639]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627208.4175363-182-191773438080655/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:30 np0005541603 python3.9[101789]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:13:31 np0005541603 python3.9[101943]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:32 np0005541603 python3.9[102095]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:13:33 np0005541603 python3.9[102173]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:34 np0005541603 python3.9[102325]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:13:34 np0005541603 python3.9[102403]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:35 np0005541603 python3.9[102555]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:13:36 np0005541603 python3.9[102707]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:13:36 np0005541603 python3.9[102785]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:13:37 np0005541603 python3.9[102937]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:13:38 np0005541603 python3.9[103015]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:13:39 np0005541603 python3.9[103167]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:13:39 np0005541603 systemd[1]: Reloading.
Dec  1 17:13:39 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:13:39 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:13:40 np0005541603 python3.9[103357]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:13:41 np0005541603 python3.9[103435]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:13:42 np0005541603 python3.9[103587]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:13:42 np0005541603 python3.9[103665]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:13:43 np0005541603 python3.9[103817]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:13:43 np0005541603 systemd[1]: Reloading.
Dec  1 17:13:43 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:13:43 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:13:43 np0005541603 systemd[1]: Starting Create netns directory...
Dec  1 17:13:43 np0005541603 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 17:13:43 np0005541603 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 17:13:43 np0005541603 systemd[1]: Finished Create netns directory.
Dec  1 17:13:44 np0005541603 python3.9[104011]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:45 np0005541603 python3.9[104164]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:13:46 np0005541603 python3.9[104287]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627225.1827767-333-115792704285014/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:47 np0005541603 python3.9[104439]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:13:48 np0005541603 python3.9[104591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:13:49 np0005541603 python3.9[104714]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627227.8416724-358-210572208704926/.source.json _original_basename=.lwt6xugy follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:13:49 np0005541603 python3.9[104866]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:13:52 np0005541603 python3.9[105295]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec  1 17:13:53 np0005541603 python3.9[105447]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 17:13:54 np0005541603 python3.9[105599]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  1 17:13:56 np0005541603 python3[105777]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 17:13:57 np0005541603 podman[105815]: 2025-12-01 22:13:57.008606865 +0000 UTC m=+0.067382452 container create ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Dec  1 17:13:57 np0005541603 podman[105815]: 2025-12-01 22:13:56.970977496 +0000 UTC m=+0.029753163 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 17:13:57 np0005541603 python3[105777]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
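The PODMAN-CONTAINER-DEBUG line above shows both the `config_data` dict and the `podman create` command `edpm_container_manage` expands it into: each entry of the `volumes` list becomes one `--volume` flag. A minimal sketch of that expansion, using two of the eleven mounts from the log (the flag prefix is abbreviated; this is an illustration of the mapping, not the module's actual implementation):

```shell
# Two representative mounts from the config_data 'volumes' list in the log;
# the real command passes eleven. Each entry maps to one --volume flag.
volumes="/run/openvswitch:/run/openvswitch:z /run/netns:/run/netns:shared"

cmd="podman create --name ovn_metadata_agent --network host --pid host"
for v in $volumes; do
  cmd="$cmd --volume $v"
done
echo "$cmd"
```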
Dec  1 17:13:58 np0005541603 python3.9[106005]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:13:58 np0005541603 podman[106128]: 2025-12-01 22:13:58.857200616 +0000 UTC m=+0.122491080 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 17:13:58 np0005541603 python3.9[106174]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:13:59 np0005541603 python3.9[106261]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:14:00 np0005541603 python3.9[106412]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764627239.5821993-446-41229725295099/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:14:00 np0005541603 python3.9[106488]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:14:00 np0005541603 systemd[1]: Reloading.
Dec  1 17:14:01 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:14:01 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:14:02 np0005541603 python3.9[106599]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:14:02 np0005541603 systemd[1]: Reloading.
Dec  1 17:14:02 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:14:02 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:14:02 np0005541603 systemd[1]: Starting ovn_metadata_agent container...
Dec  1 17:14:02 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:14:02 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a82f82e03258ff434b3c36acf664689ad833c82e27f46976d5bbfa60a83f65/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec  1 17:14:02 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a82f82e03258ff434b3c36acf664689ad833c82e27f46976d5bbfa60a83f65/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 17:14:02 np0005541603 systemd[1]: Started /usr/bin/podman healthcheck run ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4.
Dec  1 17:14:02 np0005541603 podman[106641]: 2025-12-01 22:14:02.548483374 +0000 UTC m=+0.185224249 container init ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: + sudo -E kolla_set_configs
Dec  1 17:14:02 np0005541603 podman[106641]: 2025-12-01 22:14:02.5844841 +0000 UTC m=+0.221224895 container start ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:14:02 np0005541603 edpm-start-podman-container[106641]: ovn_metadata_agent
Dec  1 17:14:02 np0005541603 podman[106664]: 2025-12-01 22:14:02.658525626 +0000 UTC m=+0.058334074 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 17:14:02 np0005541603 edpm-start-podman-container[106640]: Creating additional drop-in dependency for "ovn_metadata_agent" (ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4)
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: INFO:__main__:Validating config file
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: INFO:__main__:Copying service configuration files
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: INFO:__main__:Writing out command to execute
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: INFO:__main__:Setting permission for /var/lib/neutron
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: ++ cat /run_command
Dec  1 17:14:02 np0005541603 systemd[1]: Reloading.
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: + CMD=neutron-ovn-metadata-agent
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: + ARGS=
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: + sudo kolla_copy_cacerts
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: + [[ ! -n '' ]]
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: + . kolla_extend_start
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: Running command: 'neutron-ovn-metadata-agent'
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: + umask 0022
Dec  1 17:14:02 np0005541603 ovn_metadata_agent[106657]: + exec neutron-ovn-metadata-agent
Dec  1 17:14:02 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:14:02 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:14:02 np0005541603 systemd[1]: Started ovn_metadata_agent container.
Dec  1 17:14:03 np0005541603 systemd[1]: session-21.scope: Deactivated successfully.
Dec  1 17:14:03 np0005541603 systemd[1]: session-21.scope: Consumed 42.590s CPU time.
Dec  1 17:14:03 np0005541603 systemd-logind[788]: Session 21 logged out. Waiting for processes to exit.
Dec  1 17:14:03 np0005541603 systemd-logind[788]: Removed session 21.
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.548 106662 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.548 106662 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.548 106662 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.549 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.549 106662 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.549 106662 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.550 106662 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.550 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.550 106662 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.550 106662 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.550 106662 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.551 106662 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.551 106662 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.551 106662 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.551 106662 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.551 106662 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.551 106662 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.551 106662 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.551 106662 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.551 106662 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.552 106662 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.552 106662 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.552 106662 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.552 106662 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.552 106662 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.552 106662 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.552 106662 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.552 106662 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.552 106662 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.553 106662 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.553 106662 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.553 106662 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.553 106662 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.553 106662 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.553 106662 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.553 106662 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.553 106662 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.554 106662 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.554 106662 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.554 106662 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.554 106662 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.554 106662 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.554 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.554 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.554 106662 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.554 106662 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.555 106662 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.555 106662 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.555 106662 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.555 106662 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.555 106662 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.555 106662 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.555 106662 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.555 106662 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.555 106662 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.556 106662 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.556 106662 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.556 106662 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.556 106662 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.556 106662 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.556 106662 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.556 106662 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.556 106662 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.556 106662 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.557 106662 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.557 106662 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.557 106662 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.557 106662 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.557 106662 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.557 106662 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.557 106662 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.557 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.557 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.558 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.558 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.558 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.558 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.558 106662 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.558 106662 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.558 106662 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.558 106662 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.558 106662 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.559 106662 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.559 106662 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.559 106662 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.559 106662 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.559 106662 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.559 106662 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.559 106662 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.559 106662 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.559 106662 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.560 106662 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.560 106662 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.560 106662 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.560 106662 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.560 106662 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.560 106662 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.560 106662 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.560 106662 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.560 106662 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.560 106662 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.560 106662 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.561 106662 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.561 106662 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.561 106662 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.561 106662 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.561 106662 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.561 106662 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.561 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.561 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.562 106662 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.562 106662 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.562 106662 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.562 106662 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.562 106662 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.562 106662 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.562 106662 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.563 106662 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.563 106662 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.563 106662 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.563 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.563 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.563 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.563 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.563 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.563 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.564 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.564 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.564 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.564 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.564 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.564 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.564 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.564 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.564 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.565 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.565 106662 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.565 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.565 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.565 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.565 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.565 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.565 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.566 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.566 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.566 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.566 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.566 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.566 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.566 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.566 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.566 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.566 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.567 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.567 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.567 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.567 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.567 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.567 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.567 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.567 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.567 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.568 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.568 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.568 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.568 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.568 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.568 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.568 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.568 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.568 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.569 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.569 106662 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.569 106662 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.569 106662 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.569 106662 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.569 106662 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.569 106662 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.569 106662 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.569 106662 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.570 106662 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.570 106662 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.570 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.570 106662 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.570 106662 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.570 106662 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.570 106662 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.570 106662 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.570 106662 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.571 106662 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.571 106662 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.571 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.571 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.571 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.571 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.571 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.571 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.571 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.572 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.572 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.572 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.572 106662 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.572 106662 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.572 106662 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.572 106662 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.572 106662 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.572 106662 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.573 106662 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.573 106662 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.573 106662 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.573 106662 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.573 106662 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.573 106662 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.573 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.573 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.573 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.574 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.574 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.574 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.574 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.574 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.574 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.574 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.574 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.574 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.575 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.575 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.575 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.575 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.575 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.575 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.575 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.575 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.575 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.575 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.576 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.576 106662 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.576 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.576 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.576 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.576 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.576 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.576 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.576 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.577 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.577 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.577 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.577 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.577 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.577 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.577 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.577 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.578 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.578 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.578 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.578 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.578 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.578 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.578 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.578 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.578 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.579 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.579 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.579 106662 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.579 106662 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.579 106662 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.579 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.579 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.579 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.579 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.580 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.580 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.580 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.580 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.580 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.580 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.580 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.580 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.580 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.581 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.581 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.581 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.581 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.581 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.581 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.581 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.581 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.581 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.582 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.582 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.582 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.582 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.582 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.582 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.582 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.582 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.582 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.583 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.583 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.583 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.583 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.583 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.583 106662 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.583 106662 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.593 106662 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.594 106662 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.594 106662 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.594 106662 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.594 106662 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.606 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e (UUID: 345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.630 106662 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.631 106662 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.631 106662 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.631 106662 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.635 106662 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.641 106662 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.646 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], external_ids={}, name=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, nb_cfg_timestamp=1764627186535, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.647 106662 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fb9ca86f160>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.648 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.648 106662 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.649 106662 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.649 106662 INFO oslo_service.service [-] Starting 1 workers#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.654 106662 DEBUG oslo_service.service [-] Started child 106765 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.658 106765 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-165694'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.658 106662 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpb_x9dv_a/privsep.sock']#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.682 106765 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.683 106765 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.683 106765 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.686 106765 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.692 106765 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  1 17:14:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:04.697 106765 INFO eventlet.wsgi.server [-] (106765) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Dec  1 17:14:05 np0005541603 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec  1 17:14:05 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:05.343 106662 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  1 17:14:05 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:05.344 106662 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpb_x9dv_a/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  1 17:14:05 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:05.209 106770 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  1 17:14:05 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:05.214 106770 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  1 17:14:05 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:05.216 106770 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec  1 17:14:05 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:05.216 106770 INFO oslo.privsep.daemon [-] privsep daemon running as pid 106770#033[00m
Dec  1 17:14:05 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:05.347 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[2c445634-ea5f-4db9-bb59-a33656721688]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 17:14:05 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:05.830 106770 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:14:05 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:05.830 106770 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:14:05 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:05.830 106770 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.343 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[8b36899f-ccdc-4c73-a110-3ec25c779e67]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.347 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, column=external_ids, values=({'neutron:ovn-metadata-id': '45459c11-cec4-5b3b-8f4f-19a4eeaca11e'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.357 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.365 106662 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.365 106662 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.365 106662 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.366 106662 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.366 106662 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.366 106662 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.366 106662 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.366 106662 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.367 106662 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.367 106662 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.367 106662 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.367 106662 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.368 106662 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.368 106662 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.368 106662 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.368 106662 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.368 106662 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.369 106662 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.369 106662 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.369 106662 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.369 106662 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.369 106662 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.369 106662 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.370 106662 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.370 106662 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.370 106662 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.370 106662 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.371 106662 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.371 106662 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.371 106662 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.371 106662 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.371 106662 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.372 106662 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.372 106662 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.372 106662 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.372 106662 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.372 106662 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.373 106662 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.373 106662 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.373 106662 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.373 106662 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.374 106662 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.374 106662 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.374 106662 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.374 106662 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.374 106662 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.374 106662 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.375 106662 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.375 106662 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.375 106662 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.375 106662 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.375 106662 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.376 106662 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.376 106662 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.376 106662 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.376 106662 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.376 106662 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.376 106662 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.377 106662 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.377 106662 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.377 106662 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.377 106662 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.377 106662 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.378 106662 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.378 106662 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.378 106662 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.378 106662 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.378 106662 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.379 106662 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.379 106662 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.379 106662 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.379 106662 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.379 106662 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.380 106662 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.380 106662 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.380 106662 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.380 106662 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.380 106662 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.380 106662 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.381 106662 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.381 106662 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.381 106662 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.381 106662 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.381 106662 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.382 106662 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.382 106662 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.382 106662 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.382 106662 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.382 106662 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.382 106662 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.383 106662 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.383 106662 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.383 106662 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.383 106662 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.383 106662 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.384 106662 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.384 106662 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.384 106662 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.384 106662 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.384 106662 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.384 106662 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.385 106662 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.385 106662 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.385 106662 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.385 106662 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.385 106662 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.385 106662 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.386 106662 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.386 106662 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.386 106662 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.387 106662 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.387 106662 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.387 106662 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.387 106662 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.388 106662 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.388 106662 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.388 106662 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.388 106662 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.388 106662 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.388 106662 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.389 106662 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.389 106662 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.389 106662 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.389 106662 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.389 106662 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.390 106662 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.390 106662 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.390 106662 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.390 106662 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.390 106662 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.391 106662 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.391 106662 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.391 106662 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.391 106662 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.391 106662 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.392 106662 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.392 106662 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.392 106662 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.392 106662 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.392 106662 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.393 106662 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.393 106662 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.393 106662 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.393 106662 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.393 106662 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.393 106662 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.394 106662 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.394 106662 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.394 106662 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.394 106662 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.394 106662 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.395 106662 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.395 106662 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.395 106662 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.395 106662 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.395 106662 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.396 106662 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.396 106662 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.396 106662 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.396 106662 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.396 106662 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.396 106662 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.397 106662 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.397 106662 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.397 106662 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.397 106662 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.397 106662 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.397 106662 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.398 106662 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.398 106662 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.398 106662 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.398 106662 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.398 106662 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.399 106662 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.399 106662 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.399 106662 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.399 106662 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.399 106662 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.399 106662 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.400 106662 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.400 106662 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.400 106662 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.400 106662 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.400 106662 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.401 106662 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.401 106662 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.401 106662 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.401 106662 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.401 106662 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.402 106662 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.402 106662 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.402 106662 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.402 106662 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.402 106662 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.402 106662 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.403 106662 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.403 106662 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.403 106662 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.404 106662 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.404 106662 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.404 106662 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.405 106662 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.405 106662 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.405 106662 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.406 106662 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.406 106662 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.406 106662 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.407 106662 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.407 106662 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.408 106662 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.408 106662 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.408 106662 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.408 106662 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.408 106662 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.408 106662 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.409 106662 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.409 106662 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.409 106662 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.409 106662 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.409 106662 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.409 106662 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.409 106662 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.410 106662 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.410 106662 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.410 106662 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.411 106662 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.411 106662 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.411 106662 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.411 106662 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.411 106662 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.411 106662 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.411 106662 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.412 106662 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.412 106662 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.412 106662 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.412 106662 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.412 106662 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.412 106662 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.413 106662 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.413 106662 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.413 106662 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.413 106662 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.413 106662 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.413 106662 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.414 106662 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.414 106662 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.414 106662 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.414 106662 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.414 106662 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.414 106662 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.415 106662 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.415 106662 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.415 106662 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.415 106662 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.415 106662 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.415 106662 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.415 106662 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.416 106662 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.416 106662 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.416 106662 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.416 106662 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.416 106662 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.416 106662 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.416 106662 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.417 106662 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.417 106662 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.417 106662 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.417 106662 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.417 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.417 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.418 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.418 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.418 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.418 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.418 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.418 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.418 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.419 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.419 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.419 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.419 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.419 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.419 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.420 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.420 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.420 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.420 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.420 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.420 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.421 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.421 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.421 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.421 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.422 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.422 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.422 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.422 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.422 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.422 106662 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.423 106662 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.423 106662 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.423 106662 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.423 106662 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:14:06 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:14:06.424 106662 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  1 17:14:08 np0005541603 systemd-logind[788]: New session 22 of user zuul.
Dec  1 17:14:08 np0005541603 systemd[1]: Started Session 22 of User zuul.
Dec  1 17:14:10 np0005541603 python3.9[106928]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:14:11 np0005541603 python3.9[107084]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:14:12 np0005541603 python3.9[107249]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:14:12 np0005541603 systemd[1]: Reloading.
Dec  1 17:14:12 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:14:12 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:14:14 np0005541603 python3.9[107436]: ansible-ansible.builtin.service_facts Invoked
Dec  1 17:14:14 np0005541603 network[107453]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 17:14:14 np0005541603 network[107454]: 'network-scripts' will be removed from distribution in near future.
Dec  1 17:14:14 np0005541603 network[107455]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 17:14:18 np0005541603 python3.9[107716]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:14:19 np0005541603 python3.9[107869]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:14:20 np0005541603 python3.9[108022]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:14:21 np0005541603 python3.9[108175]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:14:22 np0005541603 python3.9[108328]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:14:23 np0005541603 python3.9[108481]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:14:24 np0005541603 python3.9[108634]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:14:25 np0005541603 python3.9[108787]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:14:26 np0005541603 python3.9[108939]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:14:27 np0005541603 python3.9[109091]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:14:28 np0005541603 python3.9[109243]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:14:28 np0005541603 python3.9[109395]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:14:29 np0005541603 podman[109519]: 2025-12-01 22:14:29.501671428 +0000 UTC m=+0.141866995 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  1 17:14:29 np0005541603 python3.9[109562]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:14:30 np0005541603 python3.9[109726]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:14:32 np0005541603 python3.9[109878]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:14:32 np0005541603 podman[109959]: 2025-12-01 22:14:32.813435499 +0000 UTC m=+0.083585207 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 17:14:33 np0005541603 python3.9[110049]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:14:33 np0005541603 python3.9[110201]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:14:34 np0005541603 python3.9[110353]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:14:35 np0005541603 python3.9[110505]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:14:36 np0005541603 python3.9[110657]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:14:36 np0005541603 python3.9[110809]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:14:37 np0005541603 python3.9[110961]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:14:38 np0005541603 python3.9[111113]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 17:14:39 np0005541603 python3.9[111265]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:14:39 np0005541603 systemd[1]: Reloading.
Dec  1 17:14:39 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:14:39 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:14:40 np0005541603 python3.9[111452]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:14:41 np0005541603 python3.9[111605]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:14:43 np0005541603 python3.9[111758]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:14:44 np0005541603 python3.9[111911]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:14:45 np0005541603 python3.9[112064]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:14:45 np0005541603 python3.9[112217]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:14:46 np0005541603 python3.9[112370]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:14:47 np0005541603 python3.9[112523]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  1 17:14:48 np0005541603 python3.9[112676]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 17:14:50 np0005541603 python3.9[112834]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  1 17:14:51 np0005541603 python3.9[112994]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 17:14:52 np0005541603 python3.9[113078]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 17:14:59 np0005541603 podman[113100]: 2025-12-01 22:14:59.870709867 +0000 UTC m=+0.139131679 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 17:15:03 np0005541603 podman[113230]: 2025-12-01 22:15:03.828572803 +0000 UTC m=+0.090242413 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:15:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:15:04.586 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:15:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:15:04.587 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:15:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:15:04.587 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:15:20 np0005541603 kernel: SELinux:  Converting 2757 SID table entries...
Dec  1 17:15:20 np0005541603 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 17:15:20 np0005541603 kernel: SELinux:  policy capability open_perms=1
Dec  1 17:15:20 np0005541603 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 17:15:20 np0005541603 kernel: SELinux:  policy capability always_check_network=0
Dec  1 17:15:20 np0005541603 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 17:15:20 np0005541603 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 17:15:20 np0005541603 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 17:15:29 np0005541603 kernel: SELinux:  Converting 2757 SID table entries...
Dec  1 17:15:29 np0005541603 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 17:15:29 np0005541603 kernel: SELinux:  policy capability open_perms=1
Dec  1 17:15:29 np0005541603 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 17:15:29 np0005541603 kernel: SELinux:  policy capability always_check_network=0
Dec  1 17:15:29 np0005541603 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 17:15:29 np0005541603 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 17:15:29 np0005541603 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 17:15:30 np0005541603 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec  1 17:15:30 np0005541603 podman[113338]: 2025-12-01 22:15:30.862458539 +0000 UTC m=+0.120300620 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:15:34 np0005541603 podman[113364]: 2025-12-01 22:15:34.799608671 +0000 UTC m=+0.068254988 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  1 17:16:01 np0005541603 podman[122490]: 2025-12-01 22:16:01.862088466 +0000 UTC m=+0.123399218 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Dec  1 17:16:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:16:04.587 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:16:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:16:04.588 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:16:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:16:04.588 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:16:05 np0005541603 podman[124298]: 2025-12-01 22:16:05.777595939 +0000 UTC m=+0.054197343 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  1 17:16:30 np0005541603 kernel: SELinux:  Converting 2758 SID table entries...
Dec  1 17:16:30 np0005541603 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 17:16:30 np0005541603 kernel: SELinux:  policy capability open_perms=1
Dec  1 17:16:30 np0005541603 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 17:16:30 np0005541603 kernel: SELinux:  policy capability always_check_network=0
Dec  1 17:16:30 np0005541603 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 17:16:30 np0005541603 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 17:16:30 np0005541603 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 17:16:32 np0005541603 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec  1 17:16:32 np0005541603 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Dec  1 17:16:32 np0005541603 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Dec  1 17:16:32 np0005541603 podman[130252]: 2025-12-01 22:16:32.23271385 +0000 UTC m=+0.161011328 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:16:36 np0005541603 podman[130340]: 2025-12-01 22:16:36.329430137 +0000 UTC m=+0.075574544 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 17:16:41 np0005541603 systemd[1]: Stopping OpenSSH server daemon...
Dec  1 17:16:41 np0005541603 systemd[1]: sshd.service: Deactivated successfully.
Dec  1 17:16:41 np0005541603 systemd[1]: Stopped OpenSSH server daemon.
Dec  1 17:16:41 np0005541603 systemd[1]: sshd.service: Consumed 4.027s CPU time, read 564.0K from disk, written 136.0K to disk.
Dec  1 17:16:41 np0005541603 systemd[1]: Stopped target sshd-keygen.target.
Dec  1 17:16:41 np0005541603 systemd[1]: Stopping sshd-keygen.target...
Dec  1 17:16:41 np0005541603 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 17:16:41 np0005541603 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 17:16:41 np0005541603 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 17:16:41 np0005541603 systemd[1]: Reached target sshd-keygen.target.
Dec  1 17:16:41 np0005541603 systemd[1]: Starting OpenSSH server daemon...
Dec  1 17:16:41 np0005541603 systemd[1]: Started OpenSSH server daemon.
Dec  1 17:16:43 np0005541603 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 17:16:43 np0005541603 systemd[1]: Starting man-db-cache-update.service...
Dec  1 17:16:44 np0005541603 systemd[1]: Reloading.
Dec  1 17:16:44 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:16:44 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:16:44 np0005541603 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 17:16:48 np0005541603 python3.9[135048]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 17:16:48 np0005541603 systemd[1]: Reloading.
Dec  1 17:16:48 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:16:48 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:16:49 np0005541603 python3.9[136117]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 17:16:49 np0005541603 systemd[1]: Reloading.
Dec  1 17:16:49 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:16:49 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:16:51 np0005541603 python3.9[137115]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 17:16:51 np0005541603 systemd[1]: Reloading.
Dec  1 17:16:51 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:16:51 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:16:52 np0005541603 python3.9[138189]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 17:16:52 np0005541603 systemd[1]: Reloading.
Dec  1 17:16:52 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:16:52 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:16:53 np0005541603 python3.9[139382]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:16:53 np0005541603 systemd[1]: Reloading.
Dec  1 17:16:54 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:16:54 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:16:55 np0005541603 python3.9[140445]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:16:55 np0005541603 systemd[1]: Reloading.
Dec  1 17:16:55 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:16:55 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:16:55 np0005541603 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 17:16:55 np0005541603 systemd[1]: Finished man-db-cache-update.service.
Dec  1 17:16:55 np0005541603 systemd[1]: man-db-cache-update.service: Consumed 14.763s CPU time.
Dec  1 17:16:55 np0005541603 systemd[1]: run-r760b1777bdb045e385dc508e82a8dc3e.service: Deactivated successfully.
Dec  1 17:16:56 np0005541603 python3.9[140955]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:16:56 np0005541603 systemd[1]: Reloading.
Dec  1 17:16:56 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:16:56 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:16:58 np0005541603 python3.9[141145]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:16:59 np0005541603 python3.9[141300]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:16:59 np0005541603 systemd[1]: Reloading.
Dec  1 17:16:59 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:16:59 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:17:00 np0005541603 python3.9[141491]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 17:17:00 np0005541603 systemd[1]: Reloading.
Dec  1 17:17:00 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:17:00 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:17:00 np0005541603 systemd[1]: Listening on libvirt proxy daemon socket.
Dec  1 17:17:00 np0005541603 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec  1 17:17:01 np0005541603 python3.9[141684]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:17:02 np0005541603 podman[141811]: 2025-12-01 22:17:02.664840733 +0000 UTC m=+0.141906043 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  1 17:17:02 np0005541603 python3.9[141858]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:17:03 np0005541603 python3.9[142019]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:17:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:17:04.588 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:17:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:17:04.588 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:17:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:17:04.588 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:17:04 np0005541603 python3.9[142174]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:17:05 np0005541603 python3.9[142329]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:17:06 np0005541603 podman[142456]: 2025-12-01 22:17:06.603421196 +0000 UTC m=+0.090995453 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:17:06 np0005541603 python3.9[142503]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:17:07 np0005541603 python3.9[142658]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:17:08 np0005541603 python3.9[142813]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:17:09 np0005541603 python3.9[142968]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:17:10 np0005541603 python3.9[143123]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:17:12 np0005541603 python3.9[143278]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:17:13 np0005541603 python3.9[143433]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:17:14 np0005541603 python3.9[143588]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:17:16 np0005541603 python3.9[143743]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 17:17:17 np0005541603 python3.9[143900]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:17:18 np0005541603 python3.9[144052]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:17:19 np0005541603 python3.9[144204]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:17:20 np0005541603 python3.9[144356]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:17:21 np0005541603 python3.9[144508]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:17:21 np0005541603 python3.9[144660]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:17:22 np0005541603 python3.9[144813]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:23 np0005541603 python3.9[144939]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764627442.261259-554-22734558677276/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:24 np0005541603 python3.9[145091]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:25 np0005541603 python3.9[145216]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764627444.058253-554-52047674924500/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:26 np0005541603 python3.9[145368]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:26 np0005541603 python3.9[145493]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764627445.4789665-554-67092517176968/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:27 np0005541603 python3.9[145645]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:28 np0005541603 python3.9[145770]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764627447.1247447-554-23479284596022/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:29 np0005541603 python3.9[145922]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:30 np0005541603 python3.9[146047]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764627448.6535854-554-5071981798154/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:30 np0005541603 python3.9[146199]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:31 np0005541603 python3.9[146324]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764627450.2544322-554-44587791336916/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:32 np0005541603 python3.9[146476]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:32 np0005541603 podman[146477]: 2025-12-01 22:17:32.851447214 +0000 UTC m=+0.120718778 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true)
Dec  1 17:17:33 np0005541603 python3.9[146626]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764627452.1066813-554-15244704018449/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:34 np0005541603 python3.9[146778]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:34 np0005541603 python3.9[146903]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764627453.5842373-554-25658591929541/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:35 np0005541603 python3.9[147055]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec  1 17:17:36 np0005541603 python3.9[147208]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:36 np0005541603 podman[147209]: 2025-12-01 22:17:36.812668147 +0000 UTC m=+0.083761723 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:17:37 np0005541603 python3.9[147379]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:38 np0005541603 python3.9[147531]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:39 np0005541603 python3.9[147683]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:39 np0005541603 python3.9[147835]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:40 np0005541603 python3.9[147987]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:41 np0005541603 python3.9[148139]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:42 np0005541603 python3.9[148291]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:43 np0005541603 python3.9[148443]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:43 np0005541603 python3.9[148595]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:44 np0005541603 python3.9[148747]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:45 np0005541603 python3.9[148899]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:45 np0005541603 python3.9[149051]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:46 np0005541603 python3.9[149203]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:47 np0005541603 python3.9[149355]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:48 np0005541603 python3.9[149478]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627467.0504057-775-43086406260566/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:49 np0005541603 python3.9[149632]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:49 np0005541603 python3.9[149755]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627468.5722094-775-47944909051385/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:50 np0005541603 python3.9[149907]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:51 np0005541603 python3.9[150030]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627469.9997106-775-107505618234685/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:51 np0005541603 python3.9[150182]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:52 np0005541603 python3.9[150305]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627471.3897674-775-29524545455952/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:53 np0005541603 python3.9[150457]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:54 np0005541603 python3.9[150580]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627472.7689586-775-109909188034723/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:54 np0005541603 python3.9[150732]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:55 np0005541603 python3.9[150855]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627474.2763207-775-158526423820039/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:56 np0005541603 python3.9[151007]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:56 np0005541603 python3.9[151130]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627475.6475632-775-1080192394715/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:57 np0005541603 python3.9[151282]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:58 np0005541603 python3.9[151405]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627477.1717234-775-276585122917012/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:17:59 np0005541603 python3.9[151557]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:17:59 np0005541603 python3.9[151681]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627478.6526265-775-223665920845827/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:00 np0005541603 python3.9[151833]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:01 np0005541603 python3.9[151956]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627480.190811-775-277111335588755/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:02 np0005541603 python3.9[152108]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:03 np0005541603 python3.9[152231]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627481.6852155-775-90676872919226/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:03 np0005541603 podman[152355]: 2025-12-01 22:18:03.732758576 +0000 UTC m=+0.170506893 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  1 17:18:03 np0005541603 python3.9[152403]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:04 np0005541603 python3.9[152533]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627483.222859-775-183578222375386/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:18:04.589 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:18:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:18:04.589 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:18:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:18:04.589 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:18:05 np0005541603 python3.9[152685]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:06 np0005541603 python3.9[152808]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627484.792281-775-180729384212805/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:06 np0005541603 python3.9[152960]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:07 np0005541603 podman[153055]: 2025-12-01 22:18:07.378772772 +0000 UTC m=+0.078862507 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  1 17:18:07 np0005541603 python3.9[153102]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627486.2745118-775-202626721056089/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:08 np0005541603 python3.9[153252]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:18:09 np0005541603 python3.9[153407]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec  1 17:18:11 np0005541603 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec  1 17:18:11 np0005541603 python3.9[153563]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:12 np0005541603 python3.9[153715]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:13 np0005541603 python3.9[153867]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:13 np0005541603 python3.9[154019]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:14 np0005541603 python3.9[154171]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:15 np0005541603 python3.9[154325]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:16 np0005541603 python3.9[154477]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:17 np0005541603 python3.9[154629]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:18 np0005541603 python3.9[154781]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:18 np0005541603 python3.9[154933]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:19 np0005541603 python3.9[155085]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:18:19 np0005541603 systemd[1]: Reloading.
Dec  1 17:18:20 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:18:20 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:18:20 np0005541603 systemd[1]: Starting libvirt logging daemon socket...
Dec  1 17:18:20 np0005541603 systemd[1]: Listening on libvirt logging daemon socket.
Dec  1 17:18:20 np0005541603 systemd[1]: Starting libvirt logging daemon admin socket...
Dec  1 17:18:20 np0005541603 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec  1 17:18:20 np0005541603 systemd[1]: Starting libvirt logging daemon...
Dec  1 17:18:20 np0005541603 systemd[1]: Started libvirt logging daemon.
Dec  1 17:18:21 np0005541603 python3.9[155281]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:18:21 np0005541603 systemd[1]: Reloading.
Dec  1 17:18:21 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:18:21 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:18:21 np0005541603 systemd[1]: Starting libvirt nodedev daemon socket...
Dec  1 17:18:21 np0005541603 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec  1 17:18:21 np0005541603 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec  1 17:18:21 np0005541603 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec  1 17:18:21 np0005541603 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec  1 17:18:21 np0005541603 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec  1 17:18:21 np0005541603 systemd[1]: Starting libvirt nodedev daemon...
Dec  1 17:18:21 np0005541603 systemd[1]: Started libvirt nodedev daemon.
Dec  1 17:18:22 np0005541603 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec  1 17:18:22 np0005541603 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec  1 17:18:22 np0005541603 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec  1 17:18:22 np0005541603 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec  1 17:18:22 np0005541603 python3.9[155498]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:18:22 np0005541603 systemd[1]: Reloading.
Dec  1 17:18:22 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:18:22 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:18:23 np0005541603 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec  1 17:18:23 np0005541603 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec  1 17:18:23 np0005541603 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec  1 17:18:23 np0005541603 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec  1 17:18:23 np0005541603 systemd[1]: Starting libvirt proxy daemon...
Dec  1 17:18:23 np0005541603 systemd[1]: Started libvirt proxy daemon.
Dec  1 17:18:23 np0005541603 setroubleshoot[155373]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l ac7cc444-4f41-413b-bba7-1987019fd646
Dec  1 17:18:23 np0005541603 setroubleshoot[155373]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec  1 17:18:23 np0005541603 setroubleshoot[155373]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l ac7cc444-4f41-413b-bba7-1987019fd646
Dec  1 17:18:23 np0005541603 setroubleshoot[155373]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec  1 17:18:24 np0005541603 python3.9[155719]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:18:24 np0005541603 systemd[1]: Reloading.
Dec  1 17:18:24 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:18:24 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:18:24 np0005541603 systemd[1]: Listening on libvirt locking daemon socket.
Dec  1 17:18:24 np0005541603 systemd[1]: Starting libvirt QEMU daemon socket...
Dec  1 17:18:24 np0005541603 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec  1 17:18:24 np0005541603 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec  1 17:18:24 np0005541603 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec  1 17:18:24 np0005541603 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec  1 17:18:24 np0005541603 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec  1 17:18:24 np0005541603 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec  1 17:18:24 np0005541603 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec  1 17:18:24 np0005541603 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec  1 17:18:24 np0005541603 systemd[1]: Starting libvirt QEMU daemon...
Dec  1 17:18:24 np0005541603 systemd[1]: Started libvirt QEMU daemon.
Dec  1 17:18:25 np0005541603 python3.9[155934]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:18:25 np0005541603 systemd[1]: Reloading.
Dec  1 17:18:25 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:18:25 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:18:25 np0005541603 systemd[1]: Starting libvirt secret daemon socket...
Dec  1 17:18:25 np0005541603 systemd[1]: Listening on libvirt secret daemon socket.
Dec  1 17:18:25 np0005541603 systemd[1]: Starting libvirt secret daemon admin socket...
Dec  1 17:18:25 np0005541603 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec  1 17:18:25 np0005541603 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec  1 17:18:25 np0005541603 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec  1 17:18:25 np0005541603 systemd[1]: Starting libvirt secret daemon...
Dec  1 17:18:25 np0005541603 systemd[1]: Started libvirt secret daemon.
Dec  1 17:18:26 np0005541603 python3.9[156146]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:27 np0005541603 python3.9[156300]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 17:18:28 np0005541603 python3.9[156452]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:29 np0005541603 python3.9[156575]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627508.1089106-1120-199655125461966/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:30 np0005541603 python3.9[156727]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:31 np0005541603 python3.9[156879]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:31 np0005541603 python3.9[156957]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:32 np0005541603 python3.9[157109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:32 np0005541603 python3.9[157187]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.uo8dpgan recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:33 np0005541603 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec  1 17:18:33 np0005541603 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.071s CPU time.
Dec  1 17:18:33 np0005541603 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec  1 17:18:33 np0005541603 python3.9[157339]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:34 np0005541603 podman[157389]: 2025-12-01 22:18:34.299887757 +0000 UTC m=+0.166488089 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 17:18:34 np0005541603 python3.9[157430]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:35 np0005541603 python3.9[157596]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:18:36 np0005541603 python3[157749]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 17:18:37 np0005541603 python3.9[157901]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:37 np0005541603 podman[157951]: 2025-12-01 22:18:37.663906886 +0000 UTC m=+0.063723705 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 17:18:37 np0005541603 python3.9[157999]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:38 np0005541603 python3.9[158151]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:39 np0005541603 python3.9[158229]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:40 np0005541603 python3.9[158381]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:40 np0005541603 python3.9[158459]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:41 np0005541603 python3.9[158611]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:42 np0005541603 python3.9[158689]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:42 np0005541603 python3.9[158841]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:43 np0005541603 python3.9[158966]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627522.3074841-1245-67335596327917/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:44 np0005541603 python3.9[159120]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:45 np0005541603 python3.9[159272]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:18:46 np0005541603 python3.9[159427]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:47 np0005541603 python3.9[159579]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:18:48 np0005541603 python3.9[159732]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:18:49 np0005541603 python3.9[159886]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:18:49 np0005541603 python3.9[160041]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:50 np0005541603 python3.9[160193]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:51 np0005541603 python3.9[160316]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627530.1898293-1317-122909130868159/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:52 np0005541603 python3.9[160468]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:53 np0005541603 python3.9[160591]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627531.6583092-1332-171144348778891/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:53 np0005541603 python3.9[160743]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:18:54 np0005541603 python3.9[160866]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627533.2764857-1347-52202557014892/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:18:55 np0005541603 python3.9[161018]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:18:55 np0005541603 systemd[1]: Reloading.
Dec  1 17:18:55 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:18:55 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:18:55 np0005541603 systemd[1]: Reached target edpm_libvirt.target.
Dec  1 17:18:56 np0005541603 python3.9[161210]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  1 17:18:57 np0005541603 systemd[1]: Reloading.
Dec  1 17:18:57 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:18:57 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:18:57 np0005541603 systemd[1]: Reloading.
Dec  1 17:18:57 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:18:57 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:18:58 np0005541603 systemd-logind[788]: Session 22 logged out. Waiting for processes to exit.
Dec  1 17:18:58 np0005541603 systemd[1]: session-22.scope: Deactivated successfully.
Dec  1 17:18:58 np0005541603 systemd[1]: session-22.scope: Consumed 4min 576ms CPU time.
Dec  1 17:18:58 np0005541603 systemd-logind[788]: Removed session 22.
Dec  1 17:19:04 np0005541603 systemd-logind[788]: New session 23 of user zuul.
Dec  1 17:19:04 np0005541603 systemd[1]: Started Session 23 of User zuul.
Dec  1 17:19:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:19:04.590 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:19:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:19:04.591 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:19:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:19:04.591 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:19:04 np0005541603 podman[161363]: 2025-12-01 22:19:04.857370234 +0000 UTC m=+0.130778974 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 17:19:05 np0005541603 python3.9[161487]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:19:06 np0005541603 python3.9[161641]: ansible-ansible.builtin.service_facts Invoked
Dec  1 17:19:06 np0005541603 network[161658]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 17:19:07 np0005541603 network[161659]: 'network-scripts' will be removed from distribution in near future.
Dec  1 17:19:07 np0005541603 network[161660]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 17:19:07 np0005541603 podman[161666]: 2025-12-01 22:19:07.948833006 +0000 UTC m=+0.085047094 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  1 17:19:12 np0005541603 python3.9[161951]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 17:19:13 np0005541603 python3.9[162035]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 17:19:20 np0005541603 python3.9[162190]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:19:21 np0005541603 python3.9[162342]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:19:22 np0005541603 python3.9[162495]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:19:23 np0005541603 python3.9[162647]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:19:24 np0005541603 python3.9[162800]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:19:25 np0005541603 python3.9[162923]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627563.5248697-95-170633793019387/.source.iscsi _original_basename=.wih1uph7 follow=False checksum=cd8a44776dab73f0715fb195725c4393a752c94c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:19:26 np0005541603 python3.9[163075]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:19:27 np0005541603 python3.9[163227]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:19:27 np0005541603 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 17:19:27 np0005541603 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 17:19:27 np0005541603 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 17:19:28 np0005541603 python3.9[163380]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:19:28 np0005541603 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec  1 17:19:29 np0005541603 python3.9[163536]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:19:29 np0005541603 systemd[1]: Reloading.
Dec  1 17:19:29 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:19:29 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:19:29 np0005541603 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  1 17:19:29 np0005541603 systemd[1]: Starting Open-iSCSI...
Dec  1 17:19:29 np0005541603 kernel: Loading iSCSI transport class v2.0-870.
Dec  1 17:19:29 np0005541603 systemd[1]: Started Open-iSCSI.
Dec  1 17:19:29 np0005541603 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec  1 17:19:29 np0005541603 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec  1 17:19:31 np0005541603 python3.9[163736]: ansible-ansible.builtin.service_facts Invoked
Dec  1 17:19:32 np0005541603 network[163753]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 17:19:32 np0005541603 network[163754]: 'network-scripts' will be removed from distribution in near future.
Dec  1 17:19:32 np0005541603 network[163755]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 17:19:35 np0005541603 podman[163827]: 2025-12-01 22:19:35.740939977 +0000 UTC m=+0.121165628 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 17:19:37 np0005541603 python3.9[164054]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  1 17:19:38 np0005541603 podman[164178]: 2025-12-01 22:19:38.492101452 +0000 UTC m=+0.089082500 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 17:19:38 np0005541603 python3.9[164226]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec  1 17:19:39 np0005541603 python3.9[164384]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:19:40 np0005541603 python3.9[164507]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627578.9548182-172-47994380322172/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:19:40 np0005541603 python3.9[164659]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:19:42 np0005541603 python3.9[164811]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:19:42 np0005541603 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  1 17:19:42 np0005541603 systemd[1]: Stopped Load Kernel Modules.
Dec  1 17:19:42 np0005541603 systemd[1]: Stopping Load Kernel Modules...
Dec  1 17:19:42 np0005541603 systemd[1]: Starting Load Kernel Modules...
Dec  1 17:19:42 np0005541603 systemd[1]: Finished Load Kernel Modules.
Dec  1 17:19:43 np0005541603 python3.9[164967]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:19:43 np0005541603 python3.9[165119]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:19:44 np0005541603 python3.9[165271]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:19:45 np0005541603 python3.9[165423]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:19:46 np0005541603 python3.9[165546]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627585.110232-230-268831630742545/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:19:47 np0005541603 python3.9[165698]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:19:47 np0005541603 python3.9[165851]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:19:48 np0005541603 python3.9[166003]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:19:49 np0005541603 python3.9[166155]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:19:50 np0005541603 python3.9[166307]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:19:51 np0005541603 python3.9[166459]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:19:52 np0005541603 python3.9[166611]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:19:53 np0005541603 python3.9[166763]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:19:53 np0005541603 python3.9[166915]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:19:54 np0005541603 python3.9[167069]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:19:55 np0005541603 python3.9[167221]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:19:56 np0005541603 python3.9[167373]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:19:57 np0005541603 python3.9[167451]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:19:58 np0005541603 python3.9[167603]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:19:58 np0005541603 python3.9[167681]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:19:59 np0005541603 python3.9[167833]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:20:00 np0005541603 python3.9[167985]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:20:01 np0005541603 python3.9[168063]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:20:02 np0005541603 python3.9[168217]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:20:02 np0005541603 python3.9[168295]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:20:03 np0005541603 python3.9[168447]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:20:03 np0005541603 systemd[1]: Reloading.
Dec  1 17:20:03 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:20:03 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:20:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:20:04.591 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:20:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:20:04.593 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:20:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:20:04.593 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:20:04 np0005541603 python3.9[168635]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:20:05 np0005541603 python3.9[168713]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:20:06 np0005541603 podman[168837]: 2025-12-01 22:20:06.183231807 +0000 UTC m=+0.192799459 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 17:20:06 np0005541603 python3.9[168886]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:20:06 np0005541603 python3.9[168971]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:20:07 np0005541603 python3.9[169123]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:20:07 np0005541603 systemd[1]: Reloading.
Dec  1 17:20:08 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:20:08 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:20:08 np0005541603 systemd[1]: Starting Create netns directory...
Dec  1 17:20:08 np0005541603 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 17:20:08 np0005541603 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 17:20:08 np0005541603 systemd[1]: Finished Create netns directory.
Dec  1 17:20:08 np0005541603 podman[169230]: 2025-12-01 22:20:08.825901471 +0000 UTC m=+0.091339230 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:20:09 np0005541603 python3.9[169336]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:20:10 np0005541603 python3.9[169488]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:20:11 np0005541603 python3.9[169611]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627609.5094576-437-161731204709404/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:20:12 np0005541603 python3.9[169763]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:20:12 np0005541603 python3.9[169915]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:20:13 np0005541603 python3.9[170038]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627612.4138253-462-264343456216004/.source.json _original_basename=.7o6ozpw5 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:20:14 np0005541603 python3.9[170190]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:20:17 np0005541603 python3.9[170617]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec  1 17:20:18 np0005541603 python3.9[170769]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 17:20:19 np0005541603 python3.9[170921]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  1 17:20:20 np0005541603 python3[171104]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 17:20:21 np0005541603 podman[171141]: 2025-12-01 22:20:21.093237807 +0000 UTC m=+0.085715549 container create a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 17:20:21 np0005541603 podman[171141]: 2025-12-01 22:20:21.051903402 +0000 UTC m=+0.044381204 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  1 17:20:21 np0005541603 python3[171104]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  1 17:20:21 np0005541603 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec  1 17:20:22 np0005541603 python3.9[171333]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:20:23 np0005541603 python3.9[171487]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:20:23 np0005541603 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 17:20:23 np0005541603 python3.9[171564]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:20:24 np0005541603 systemd[1]: virtqemud.service: Deactivated successfully.
Dec  1 17:20:24 np0005541603 python3.9[171715]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764627623.8488235-550-62797273028397/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:20:25 np0005541603 python3.9[171792]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:20:25 np0005541603 systemd[1]: Reloading.
Dec  1 17:20:25 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:20:25 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:20:25 np0005541603 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  1 17:20:26 np0005541603 python3.9[171904]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:20:26 np0005541603 systemd[1]: Reloading.
Dec  1 17:20:26 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:20:26 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:20:26 np0005541603 systemd[1]: Starting multipathd container...
Dec  1 17:20:27 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:20:27 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c27f285df6cf64940bc5d2658bfe109d4595f592bb235b0f345629e276530397/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  1 17:20:27 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c27f285df6cf64940bc5d2658bfe109d4595f592bb235b0f345629e276530397/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  1 17:20:27 np0005541603 systemd[1]: Started /usr/bin/podman healthcheck run a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8.
Dec  1 17:20:27 np0005541603 podman[171944]: 2025-12-01 22:20:27.138808203 +0000 UTC m=+0.312976565 container init a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  1 17:20:27 np0005541603 multipathd[171959]: + sudo -E kolla_set_configs
Dec  1 17:20:27 np0005541603 podman[171944]: 2025-12-01 22:20:27.173397645 +0000 UTC m=+0.347565937 container start a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 17:20:27 np0005541603 podman[171944]: multipathd
Dec  1 17:20:27 np0005541603 systemd[1]: Started multipathd container.
Dec  1 17:20:27 np0005541603 multipathd[171959]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 17:20:27 np0005541603 multipathd[171959]: INFO:__main__:Validating config file
Dec  1 17:20:27 np0005541603 multipathd[171959]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 17:20:27 np0005541603 multipathd[171959]: INFO:__main__:Writing out command to execute
Dec  1 17:20:27 np0005541603 multipathd[171959]: ++ cat /run_command
Dec  1 17:20:27 np0005541603 multipathd[171959]: + CMD='/usr/sbin/multipathd -d'
Dec  1 17:20:27 np0005541603 multipathd[171959]: + ARGS=
Dec  1 17:20:27 np0005541603 multipathd[171959]: + sudo kolla_copy_cacerts
Dec  1 17:20:27 np0005541603 podman[171966]: 2025-12-01 22:20:27.278248122 +0000 UTC m=+0.080266693 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 17:20:27 np0005541603 multipathd[171959]: Running command: '/usr/sbin/multipathd -d'
Dec  1 17:20:27 np0005541603 multipathd[171959]: + [[ ! -n '' ]]
Dec  1 17:20:27 np0005541603 multipathd[171959]: + . kolla_extend_start
Dec  1 17:20:27 np0005541603 multipathd[171959]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  1 17:20:27 np0005541603 multipathd[171959]: + umask 0022
Dec  1 17:20:27 np0005541603 multipathd[171959]: + exec /usr/sbin/multipathd -d
Dec  1 17:20:27 np0005541603 systemd[1]: a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8-458225d270b8585d.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 17:20:27 np0005541603 systemd[1]: a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8-458225d270b8585d.service: Failed with result 'exit-code'.
Dec  1 17:20:27 np0005541603 multipathd[171959]: 3136.916486 | --------start up--------
Dec  1 17:20:27 np0005541603 multipathd[171959]: 3136.916506 | read /etc/multipath.conf
Dec  1 17:20:27 np0005541603 multipathd[171959]: 3136.924813 | path checkers start up
Dec  1 17:20:27 np0005541603 python3.9[172147]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:20:28 np0005541603 python3.9[172301]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:20:29 np0005541603 python3.9[172466]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:20:29 np0005541603 systemd[1]: Stopping multipathd container...
Dec  1 17:20:29 np0005541603 multipathd[171959]: 3139.570053 | exit (signal)
Dec  1 17:20:29 np0005541603 multipathd[171959]: 3139.570186 | --------shut down-------
Dec  1 17:20:29 np0005541603 systemd[1]: libpod-a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8.scope: Deactivated successfully.
Dec  1 17:20:29 np0005541603 podman[172470]: 2025-12-01 22:20:29.988586298 +0000 UTC m=+0.090562368 container died a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 17:20:30 np0005541603 systemd[1]: a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8-458225d270b8585d.timer: Deactivated successfully.
Dec  1 17:20:30 np0005541603 systemd[1]: Stopped /usr/bin/podman healthcheck run a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8.
Dec  1 17:20:30 np0005541603 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8-userdata-shm.mount: Deactivated successfully.
Dec  1 17:20:30 np0005541603 systemd[1]: var-lib-containers-storage-overlay-c27f285df6cf64940bc5d2658bfe109d4595f592bb235b0f345629e276530397-merged.mount: Deactivated successfully.
Dec  1 17:20:30 np0005541603 podman[172470]: 2025-12-01 22:20:30.04897247 +0000 UTC m=+0.150948500 container cleanup a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 17:20:30 np0005541603 podman[172470]: multipathd
Dec  1 17:20:30 np0005541603 podman[172496]: multipathd
Dec  1 17:20:30 np0005541603 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec  1 17:20:30 np0005541603 systemd[1]: Stopped multipathd container.
Dec  1 17:20:30 np0005541603 systemd[1]: Starting multipathd container...
Dec  1 17:20:30 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:20:30 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c27f285df6cf64940bc5d2658bfe109d4595f592bb235b0f345629e276530397/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  1 17:20:30 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c27f285df6cf64940bc5d2658bfe109d4595f592bb235b0f345629e276530397/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  1 17:20:30 np0005541603 systemd[1]: Started /usr/bin/podman healthcheck run a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8.
Dec  1 17:20:30 np0005541603 podman[172509]: 2025-12-01 22:20:30.275455523 +0000 UTC m=+0.123087511 container init a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 17:20:30 np0005541603 multipathd[172524]: + sudo -E kolla_set_configs
Dec  1 17:20:30 np0005541603 podman[172509]: 2025-12-01 22:20:30.305050101 +0000 UTC m=+0.152682049 container start a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 17:20:30 np0005541603 podman[172509]: multipathd
Dec  1 17:20:30 np0005541603 systemd[1]: Started multipathd container.
Dec  1 17:20:30 np0005541603 multipathd[172524]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 17:20:30 np0005541603 multipathd[172524]: INFO:__main__:Validating config file
Dec  1 17:20:30 np0005541603 multipathd[172524]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 17:20:30 np0005541603 multipathd[172524]: INFO:__main__:Writing out command to execute
Dec  1 17:20:30 np0005541603 multipathd[172524]: ++ cat /run_command
Dec  1 17:20:30 np0005541603 multipathd[172524]: + CMD='/usr/sbin/multipathd -d'
Dec  1 17:20:30 np0005541603 multipathd[172524]: + ARGS=
Dec  1 17:20:30 np0005541603 multipathd[172524]: + sudo kolla_copy_cacerts
Dec  1 17:20:30 np0005541603 multipathd[172524]: Running command: '/usr/sbin/multipathd -d'
Dec  1 17:20:30 np0005541603 multipathd[172524]: + [[ ! -n '' ]]
Dec  1 17:20:30 np0005541603 multipathd[172524]: + . kolla_extend_start
Dec  1 17:20:30 np0005541603 multipathd[172524]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  1 17:20:30 np0005541603 multipathd[172524]: + umask 0022
Dec  1 17:20:30 np0005541603 multipathd[172524]: + exec /usr/sbin/multipathd -d
Dec  1 17:20:30 np0005541603 podman[172531]: 2025-12-01 22:20:30.426789892 +0000 UTC m=+0.107106132 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 17:20:30 np0005541603 multipathd[172524]: 3140.058973 | --------start up--------
Dec  1 17:20:30 np0005541603 multipathd[172524]: 3140.059004 | read /etc/multipath.conf
Dec  1 17:20:30 np0005541603 systemd[1]: a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8-d748dbfa6e0dd5e.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 17:20:30 np0005541603 systemd[1]: a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8-d748dbfa6e0dd5e.service: Failed with result 'exit-code'.
Dec  1 17:20:30 np0005541603 multipathd[172524]: 3140.066638 | path checkers start up
Dec  1 17:20:31 np0005541603 python3.9[172716]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:20:32 np0005541603 python3.9[172868]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  1 17:20:33 np0005541603 python3.9[173020]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec  1 17:20:33 np0005541603 kernel: Key type psk registered
Dec  1 17:20:34 np0005541603 python3.9[173182]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:20:35 np0005541603 python3.9[173305]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627633.6428065-630-206947183621994/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:20:35 np0005541603 python3.9[173457]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:20:36 np0005541603 podman[173581]: 2025-12-01 22:20:36.713088282 +0000 UTC m=+0.156443507 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec  1 17:20:36 np0005541603 python3.9[173626]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:20:37 np0005541603 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  1 17:20:37 np0005541603 systemd[1]: Stopped Load Kernel Modules.
Dec  1 17:20:37 np0005541603 systemd[1]: Stopping Load Kernel Modules...
Dec  1 17:20:37 np0005541603 systemd[1]: Starting Load Kernel Modules...
Dec  1 17:20:37 np0005541603 systemd[1]: Finished Load Kernel Modules.
Dec  1 17:20:38 np0005541603 python3.9[173791]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 17:20:39 np0005541603 podman[173795]: 2025-12-01 22:20:39.817089044 +0000 UTC m=+0.075658251 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 17:20:40 np0005541603 systemd[1]: Reloading.
Dec  1 17:20:40 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:20:40 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:20:40 np0005541603 systemd[1]: Reloading.
Dec  1 17:20:41 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:20:41 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:20:41 np0005541603 systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  1 17:20:41 np0005541603 systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  1 17:20:41 np0005541603 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 17:20:41 np0005541603 systemd[1]: Starting man-db-cache-update.service...
Dec  1 17:20:41 np0005541603 systemd[1]: Reloading.
Dec  1 17:20:41 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:20:41 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:20:41 np0005541603 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 17:20:43 np0005541603 python3.9[175170]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:20:43 np0005541603 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 17:20:43 np0005541603 systemd[1]: Finished man-db-cache-update.service.
Dec  1 17:20:43 np0005541603 systemd[1]: man-db-cache-update.service: Consumed 2.009s CPU time.
Dec  1 17:20:43 np0005541603 systemd[1]: run-rab502ecb802347f6ae0ec82fe12ca25b.service: Deactivated successfully.
Dec  1 17:20:43 np0005541603 systemd[1]: Stopping Open-iSCSI...
Dec  1 17:20:43 np0005541603 iscsid[163576]: iscsid shutting down.
Dec  1 17:20:43 np0005541603 systemd[1]: iscsid.service: Deactivated successfully.
Dec  1 17:20:43 np0005541603 systemd[1]: Stopped Open-iSCSI.
Dec  1 17:20:43 np0005541603 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  1 17:20:43 np0005541603 systemd[1]: Starting Open-iSCSI...
Dec  1 17:20:43 np0005541603 systemd[1]: Started Open-iSCSI.
Dec  1 17:20:44 np0005541603 python3.9[175418]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:20:45 np0005541603 python3.9[175574]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:20:46 np0005541603 python3.9[175726]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:20:46 np0005541603 systemd[1]: Reloading.
Dec  1 17:20:46 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:20:46 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:20:47 np0005541603 python3.9[175911]: ansible-ansible.builtin.service_facts Invoked
Dec  1 17:20:47 np0005541603 network[175928]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 17:20:47 np0005541603 network[175929]: 'network-scripts' will be removed from distribution in near future.
Dec  1 17:20:47 np0005541603 network[175930]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 17:20:53 np0005541603 python3.9[176204]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:20:54 np0005541603 python3.9[176357]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:20:55 np0005541603 python3.9[176510]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:20:56 np0005541603 python3.9[176663]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:20:57 np0005541603 python3.9[176816]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:20:58 np0005541603 python3.9[176969]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:20:59 np0005541603 python3.9[177122]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:21:00 np0005541603 podman[177247]: 2025-12-01 22:21:00.574795705 +0000 UTC m=+0.076963489 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 17:21:00 np0005541603 python3.9[177295]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:21:01 np0005541603 python3.9[177448]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:02 np0005541603 python3.9[177600]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:03 np0005541603 python3.9[177752]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:04 np0005541603 python3.9[177904]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:21:04.591 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:21:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:21:04.592 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:21:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:21:04.593 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:21:05 np0005541603 python3.9[178056]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:06 np0005541603 python3.9[178208]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:06 np0005541603 python3.9[178360]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:07 np0005541603 podman[178484]: 2025-12-01 22:21:07.761912132 +0000 UTC m=+0.163577572 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Dec  1 17:21:07 np0005541603 python3.9[178531]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:08 np0005541603 python3.9[178691]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:09 np0005541603 python3.9[178843]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:10 np0005541603 podman[178967]: 2025-12-01 22:21:10.111275441 +0000 UTC m=+0.095008484 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 17:21:10 np0005541603 python3.9[179007]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:11 np0005541603 python3.9[179165]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:11 np0005541603 python3.9[179317]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:12 np0005541603 python3.9[179469]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:13 np0005541603 python3.9[179621]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:14 np0005541603 python3.9[179773]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:15 np0005541603 python3.9[179925]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:21:16 np0005541603 python3.9[180077]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 17:21:17 np0005541603 python3.9[180229]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:21:17 np0005541603 systemd[1]: Reloading.
Dec  1 17:21:17 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:21:17 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:21:18 np0005541603 python3.9[180416]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:21:19 np0005541603 python3.9[180569]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:21:19 np0005541603 python3.9[180722]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:21:20 np0005541603 python3.9[180875]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:21:21 np0005541603 python3.9[181028]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:21:22 np0005541603 python3.9[181181]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:21:22 np0005541603 python3.9[181334]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:21:23 np0005541603 python3.9[181487]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:21:25 np0005541603 python3.9[181640]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:26 np0005541603 python3.9[181792]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:27 np0005541603 python3.9[181944]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:28 np0005541603 python3.9[182096]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:29 np0005541603 python3.9[182248]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:29 np0005541603 python3.9[182402]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:30 np0005541603 python3.9[182554]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:30 np0005541603 podman[182555]: 2025-12-01 22:21:30.836423865 +0000 UTC m=+0.103299061 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd)
Dec  1 17:21:31 np0005541603 python3.9[182727]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:32 np0005541603 python3.9[182879]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:33 np0005541603 python3.9[183031]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:38 np0005541603 podman[183155]: 2025-12-01 22:21:38.129538788 +0000 UTC m=+0.140422660 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec  1 17:21:38 np0005541603 python3.9[183201]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec  1 17:21:39 np0005541603 python3.9[183363]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 17:21:40 np0005541603 podman[183521]: 2025-12-01 22:21:40.285586897 +0000 UTC m=+0.089983861 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 17:21:40 np0005541603 python3.9[183522]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  1 17:21:41 np0005541603 systemd-logind[788]: New session 24 of user zuul.
Dec  1 17:21:41 np0005541603 systemd[1]: Started Session 24 of User zuul.
Dec  1 17:21:41 np0005541603 systemd[1]: session-24.scope: Deactivated successfully.
Dec  1 17:21:41 np0005541603 systemd-logind[788]: Session 24 logged out. Waiting for processes to exit.
Dec  1 17:21:41 np0005541603 systemd-logind[788]: Removed session 24.
Dec  1 17:21:42 np0005541603 python3.9[183726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:21:43 np0005541603 python3.9[183847]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627701.9636252-1229-146560188440510/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:44 np0005541603 python3.9[183997]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:21:44 np0005541603 python3.9[184073]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:45 np0005541603 python3.9[184223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:21:46 np0005541603 python3.9[184344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627704.8391373-1229-118585193984645/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:46 np0005541603 python3.9[184494]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:21:47 np0005541603 python3.9[184615]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627706.2981215-1229-37286089257295/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:48 np0005541603 python3.9[184765]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:21:48 np0005541603 python3.9[184886]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627707.7209-1229-75488032192881/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:49 np0005541603 python3.9[185036]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:21:50 np0005541603 python3.9[185157]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627709.0274298-1229-122690581371816/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:51 np0005541603 python3.9[185309]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:52 np0005541603 python3.9[185461]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:21:52 np0005541603 python3.9[185613]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:21:53 np0005541603 python3.9[185765]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:21:54 np0005541603 python3.9[185888]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764627713.1687586-1336-98463626083876/.source _original_basename=.3j9prl05 follow=False checksum=51b0849589eb81e4463b2da105c1d365fc47a105 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec  1 17:21:55 np0005541603 python3.9[186040]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:21:56 np0005541603 python3.9[186192]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:21:57 np0005541603 python3.9[186313]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627715.7667263-1362-121105459016972/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:57 np0005541603 python3.9[186463]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:21:58 np0005541603 python3.9[186584]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627717.3633475-1377-188094573286632/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:21:59 np0005541603 python3.9[186736]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec  1 17:22:00 np0005541603 python3.9[186888]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 17:22:01 np0005541603 podman[187012]: 2025-12-01 22:22:01.433262628 +0000 UTC m=+0.105733847 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  1 17:22:01 np0005541603 python3[187059]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 17:22:02 np0005541603 podman[187097]: 2025-12-01 22:22:02.015817577 +0000 UTC m=+0.089842854 container create 458c6944243be1ffc91527f738277158acf436291f357a38e7fd05be7960b4ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, managed_by=edpm_ansible)
Dec  1 17:22:02 np0005541603 podman[187097]: 2025-12-01 22:22:01.972172031 +0000 UTC m=+0.046197338 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  1 17:22:02 np0005541603 python3[187059]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec  1 17:22:03 np0005541603 python3.9[187287]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:22:03 np0005541603 python3.9[187441]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec  1 17:22:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:22:04.592 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 17:22:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:22:04.593 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 17:22:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:22:04.593 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 17:22:04 np0005541603 python3.9[187593]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 17:22:06 np0005541603 python3[187745]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 17:22:06 np0005541603 podman[187783]: 2025-12-01 22:22:06.328479123 +0000 UTC m=+0.078408998 container create 3c9406d8bcc46f24b8b33e689719344d26b580d56ba4929a7e1cc6ae37ff5057 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 17:22:06 np0005541603 podman[187783]: 2025-12-01 22:22:06.29613222 +0000 UTC m=+0.046062105 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  1 17:22:06 np0005541603 python3[187745]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec  1 17:22:07 np0005541603 python3.9[187973]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:22:08 np0005541603 podman[188127]: 2025-12-01 22:22:08.323601767 +0000 UTC m=+0.106957652 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 17:22:08 np0005541603 python3.9[188128]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:22:09 np0005541603 python3.9[188304]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764627728.5384336-1469-247825466533485/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:22:09 np0005541603 python3.9[188380]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:22:09 np0005541603 systemd[1]: Reloading.
Dec  1 17:22:10 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:22:10 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:22:10 np0005541603 podman[188463]: 2025-12-01 22:22:10.721728538 +0000 UTC m=+0.101824966 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 17:22:11 np0005541603 python3.9[188510]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:22:11 np0005541603 systemd[1]: Reloading.
Dec  1 17:22:11 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:22:11 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:22:11 np0005541603 systemd[1]: Starting nova_compute container...
Dec  1 17:22:11 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:22:11 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bd7e4ad061c0458706850cdbd2c9d5b27b53c40b078472427116fac158d000/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  1 17:22:11 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bd7e4ad061c0458706850cdbd2c9d5b27b53c40b078472427116fac158d000/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  1 17:22:11 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bd7e4ad061c0458706850cdbd2c9d5b27b53c40b078472427116fac158d000/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  1 17:22:11 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bd7e4ad061c0458706850cdbd2c9d5b27b53c40b078472427116fac158d000/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  1 17:22:11 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bd7e4ad061c0458706850cdbd2c9d5b27b53c40b078472427116fac158d000/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  1 17:22:11 np0005541603 podman[188551]: 2025-12-01 22:22:11.566886458 +0000 UTC m=+0.109261318 container init 3c9406d8bcc46f24b8b33e689719344d26b580d56ba4929a7e1cc6ae37ff5057 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  1 17:22:11 np0005541603 podman[188551]: 2025-12-01 22:22:11.581221317 +0000 UTC m=+0.123596117 container start 3c9406d8bcc46f24b8b33e689719344d26b580d56ba4929a7e1cc6ae37ff5057 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 17:22:11 np0005541603 podman[188551]: nova_compute
Dec  1 17:22:11 np0005541603 nova_compute[188566]: + sudo -E kolla_set_configs
Dec  1 17:22:11 np0005541603 systemd[1]: Started nova_compute container.
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Validating config file
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Copying service configuration files
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Deleting /etc/ceph
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Creating directory /etc/ceph
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Setting permission for /etc/ceph
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Writing out command to execute
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  1 17:22:11 np0005541603 nova_compute[188566]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  1 17:22:11 np0005541603 nova_compute[188566]: ++ cat /run_command
Dec  1 17:22:11 np0005541603 nova_compute[188566]: + CMD=nova-compute
Dec  1 17:22:11 np0005541603 nova_compute[188566]: + ARGS=
Dec  1 17:22:11 np0005541603 nova_compute[188566]: + sudo kolla_copy_cacerts
Dec  1 17:22:11 np0005541603 nova_compute[188566]: + [[ ! -n '' ]]
Dec  1 17:22:11 np0005541603 nova_compute[188566]: + . kolla_extend_start
Dec  1 17:22:11 np0005541603 nova_compute[188566]: Running command: 'nova-compute'
Dec  1 17:22:11 np0005541603 nova_compute[188566]: + echo 'Running command: '\''nova-compute'\'''
Dec  1 17:22:11 np0005541603 nova_compute[188566]: + umask 0022
Dec  1 17:22:11 np0005541603 nova_compute[188566]: + exec nova-compute
Dec  1 17:22:12 np0005541603 python3.9[188728]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:22:13 np0005541603 python3.9[188878]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:22:13 np0005541603 nova_compute[188566]: 2025-12-01 22:22:13.701 188570 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 17:22:13 np0005541603 nova_compute[188566]: 2025-12-01 22:22:13.702 188570 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 17:22:13 np0005541603 nova_compute[188566]: 2025-12-01 22:22:13.702 188570 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 17:22:13 np0005541603 nova_compute[188566]: 2025-12-01 22:22:13.702 188570 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  1 17:22:13 np0005541603 nova_compute[188566]: 2025-12-01 22:22:13.845 188570 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 17:22:13 np0005541603 nova_compute[188566]: 2025-12-01 22:22:13.874 188570 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 17:22:13 np0005541603 nova_compute[188566]: 2025-12-01 22:22:13.875 188570 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  1 17:22:14 np0005541603 python3.9[189032]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.670 188570 INFO nova.virt.driver [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.824 188570 INFO nova.compute.provider_config [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.849 188570 DEBUG oslo_concurrency.lockutils [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.850 188570 DEBUG oslo_concurrency.lockutils [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.850 188570 DEBUG oslo_concurrency.lockutils [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.851 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.851 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.851 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.852 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.852 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.852 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.852 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.853 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.853 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.853 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.853 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.854 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.854 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.854 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.854 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.854 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.855 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.855 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.855 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.855 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.855 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.856 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.856 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.856 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.856 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.857 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.857 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.857 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.857 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.858 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.858 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.858 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.858 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.859 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.859 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.859 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.859 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.859 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.860 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.860 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.861 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.861 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.861 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.862 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.862 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.862 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.862 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.862 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.863 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.863 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.863 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.863 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.864 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.864 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.864 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.864 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.864 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.865 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.865 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.865 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.866 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.866 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.866 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.866 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.867 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.867 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.867 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.867 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.868 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.868 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.868 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.868 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.869 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.869 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.869 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.869 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.870 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.870 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.870 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.870 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.871 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.871 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.871 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.871 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.871 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.872 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.872 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.872 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.872 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.872 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.873 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.873 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.873 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.873 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.873 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.874 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.874 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.874 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.874 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.874 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.875 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.875 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.875 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.875 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.875 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.876 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.876 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.876 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.876 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.877 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.877 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.877 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.877 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.878 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.878 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.878 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.878 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.879 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.879 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.879 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.879 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.879 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.880 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.880 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.880 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.880 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.880 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.881 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.881 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.881 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.881 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.882 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.882 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.882 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.882 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.883 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.883 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.883 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.883 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.883 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.884 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.884 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.884 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.884 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.884 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.885 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.885 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.885 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.885 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.886 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.886 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.886 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.886 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.886 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.887 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.887 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.887 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.887 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.888 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.888 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.888 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.889 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.889 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.889 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.889 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.890 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.890 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.890 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.890 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.890 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.891 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.891 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.891 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.891 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.891 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.892 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.892 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.892 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.892 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.893 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.893 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.893 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.893 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.893 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.894 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.894 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.894 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.894 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.894 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.895 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.895 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.895 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.895 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.896 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.896 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.896 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.896 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.896 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.897 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.897 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.897 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.897 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.897 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.897 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.897 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.898 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.898 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.898 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.898 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.898 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.898 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.898 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.899 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.899 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.899 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.899 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.899 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.899 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.900 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.900 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.900 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.900 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.900 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.900 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.901 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.901 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.901 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.901 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.901 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.902 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.902 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.902 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.902 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.902 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.902 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.902 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.903 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.903 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.903 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.903 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.903 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.903 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.903 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.904 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.904 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.904 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.904 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.904 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.904 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.904 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.905 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.905 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.905 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.905 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.905 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.905 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.905 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.906 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.906 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.906 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.906 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.906 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.906 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.906 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.907 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.907 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.907 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.907 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.907 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.907 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.907 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.908 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.908 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.908 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.908 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.908 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.908 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.908 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.909 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.909 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.909 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.909 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.909 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.909 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.910 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.910 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.910 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.910 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.910 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.910 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.910 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.911 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.911 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.911 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.911 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.911 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.911 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.912 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.912 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.912 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.912 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.912 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.912 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.913 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.913 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.913 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.913 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.913 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.913 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.913 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.914 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.914 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.914 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.914 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.914 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.914 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.915 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.915 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.915 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.915 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.915 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.916 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.916 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.916 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.916 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.916 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.917 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.917 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.917 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.917 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.917 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.918 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.918 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.918 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.918 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.918 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.919 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.919 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.919 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.919 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.919 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.919 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.920 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.920 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.920 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.920 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.920 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.921 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.921 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.921 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.921 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.921 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.921 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.922 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.922 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.922 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.923 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.923 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.923 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.923 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.923 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.923 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.924 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.924 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.924 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.924 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.924 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.924 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.925 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.925 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.925 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.925 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.925 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.925 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.925 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.926 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.926 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.926 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.926 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.926 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.926 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.927 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.927 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.927 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.927 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.927 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.927 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.928 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.928 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.928 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.928 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.928 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.928 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.929 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.929 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.929 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.929 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.929 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.929 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.929 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.930 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.930 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.930 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.930 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.930 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.930 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.931 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.931 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.931 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.931 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.931 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.931 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.931 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.932 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.932 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.932 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.932 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.932 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.932 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.932 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.933 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.933 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.933 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.933 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.933 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.933 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.933 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.933 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.934 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.934 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.934 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.934 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.934 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.934 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.934 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.935 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.935 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.935 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.935 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.935 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.935 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.935 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.936 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.936 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.936 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.936 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.936 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.936 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.937 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.937 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.937 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.937 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.937 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.937 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.938 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.938 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.938 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.938 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.938 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.938 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.938 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.938 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.939 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.939 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.939 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.939 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.939 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.939 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.940 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.940 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.940 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.940 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.940 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.940 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.941 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.941 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.941 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.941 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.941 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.941 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.942 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.942 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.942 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.942 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.942 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.942 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.942 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.943 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.943 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.943 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.943 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.943 188570 WARNING oslo_config.cfg [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  1 17:22:14 np0005541603 nova_compute[188566]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  1 17:22:14 np0005541603 nova_compute[188566]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  1 17:22:14 np0005541603 nova_compute[188566]: and ``live_migration_inbound_addr`` respectively.
Dec  1 17:22:14 np0005541603 nova_compute[188566]: ).  Its value may be silently ignored in the future.#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.944 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.944 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.944 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.944 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.944 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.944 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.945 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.945 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.945 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.945 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.945 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.945 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.945 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.946 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.946 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.946 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.946 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.946 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.946 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.946 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.947 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.947 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.947 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.947 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.947 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.947 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.947 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.948 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.948 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.948 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.948 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.948 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.948 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.949 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.949 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.949 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.949 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.949 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.949 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.949 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.949 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.950 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.950 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.950 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.950 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.950 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.950 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.950 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.951 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.951 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.951 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.951 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.951 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.951 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.951 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.952 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.952 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.952 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.952 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.952 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.952 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.952 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.953 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.953 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.953 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.953 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.953 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.953 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.953 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.954 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.954 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.954 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.954 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.954 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.954 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.954 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.955 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.955 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.955 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.955 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.955 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.955 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.955 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.956 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.956 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.956 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.956 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.956 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.956 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.956 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.957 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.957 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.957 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.957 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.957 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.957 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.957 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.958 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.958 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.958 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.958 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.958 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.958 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.958 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.959 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.959 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.959 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.959 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.959 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.959 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.959 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.960 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.960 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.960 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.960 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.960 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.960 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.960 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.961 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.961 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.961 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.961 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.961 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.961 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.961 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.962 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.962 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.962 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.962 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.962 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.962 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.962 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.963 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.963 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.963 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.963 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.963 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.963 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.963 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.964 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.964 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.964 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.964 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.964 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.964 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.965 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.965 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.965 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.965 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.965 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.965 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.965 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.966 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.966 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.966 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.966 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.966 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.966 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.966 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.967 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.967 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.967 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.967 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.967 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.967 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.967 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.968 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.968 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.968 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.968 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.968 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.968 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.968 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.969 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.969 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.969 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.969 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.969 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.969 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.970 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.970 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.970 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.970 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.970 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.970 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.971 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.971 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.971 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.971 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.971 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.971 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.972 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.972 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.972 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.972 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.972 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.972 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.973 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.973 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.973 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.973 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.973 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.973 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.973 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.974 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.974 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.974 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.974 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.974 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.974 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.974 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.975 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.975 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.975 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.975 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.975 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.975 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.975 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.976 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.976 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.976 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.976 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.976 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.976 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.976 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.977 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.977 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.977 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.977 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.977 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.977 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.977 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.977 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.978 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.978 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.978 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.978 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.978 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.978 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.978 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.979 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.979 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.979 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.979 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.979 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.979 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.979 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.980 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.980 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.980 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.980 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.980 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.980 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.981 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.981 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.981 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.981 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.981 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.981 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.981 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.982 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.982 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.982 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.982 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.982 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.982 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.982 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.983 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.983 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.983 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.983 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.983 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.983 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.983 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.983 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.984 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.984 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.984 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.984 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.984 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.984 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.985 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.985 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.985 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.985 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.985 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.985 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.985 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.985 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.986 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.986 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.986 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.986 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.986 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.986 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.987 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.987 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.987 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.987 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.987 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.987 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.987 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.988 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.988 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.988 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.988 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.988 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.988 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.988 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.989 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.989 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.989 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.989 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.989 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.989 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.989 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.990 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.990 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.990 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.990 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.990 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.990 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.990 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.991 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.991 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.991 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.991 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.991 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.991 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.991 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.992 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.992 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.992 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.992 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.992 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.992 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.992 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.993 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.993 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.993 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.993 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.993 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.993 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.993 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.994 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.994 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.994 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.994 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.994 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.994 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.994 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.994 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.995 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.995 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.995 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.995 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.995 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.995 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.995 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.996 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.996 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.996 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.996 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.996 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.996 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.996 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.997 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.997 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.997 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.997 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.997 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.997 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.997 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.998 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.998 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.998 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.998 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.998 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.998 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.998 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.999 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.999 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.999 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.999 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.999 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:14 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.999 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:14.999 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.000 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.000 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.000 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.000 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.000 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.000 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.000 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.001 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.001 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.001 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.001 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.001 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.001 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.001 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.002 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.002 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.002 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.002 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.002 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.002 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.002 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.003 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.003 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.003 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.003 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.003 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.003 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.003 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.004 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.004 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.004 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.004 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.004 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.004 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.004 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.005 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.005 188570 DEBUG oslo_service.service [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.006 188570 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.021 188570 DEBUG nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.021 188570 DEBUG nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.022 188570 DEBUG nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.022 188570 DEBUG nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  1 17:22:15 np0005541603 systemd[1]: Starting libvirt QEMU daemon...
Dec  1 17:22:15 np0005541603 systemd[1]: Started libvirt QEMU daemon.
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.119 188570 DEBUG nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f59697f4820> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.124 188570 DEBUG nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f59697f4820> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.125 188570 INFO nova.virt.libvirt.driver [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.153 188570 WARNING nova.virt.libvirt.driver [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  1 17:22:15 np0005541603 nova_compute[188566]: 2025-12-01 22:22:15.153 188570 DEBUG nova.virt.libvirt.volume.mount [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  1 17:22:15 np0005541603 python3.9[189236]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  1 17:22:15 np0005541603 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 17:22:15 np0005541603 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.090 188570 INFO nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Libvirt host capabilities <capabilities>
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <host>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <uuid>76dcf733-b3f8-4a52-82fd-91cdbadb534b</uuid>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <cpu>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <arch>x86_64</arch>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model>EPYC-Rome-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <vendor>AMD</vendor>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <microcode version='16777317'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <signature family='23' model='49' stepping='0'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='x2apic'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='tsc-deadline'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='osxsave'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='hypervisor'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='tsc_adjust'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='spec-ctrl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='stibp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='arch-capabilities'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='ssbd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='cmp_legacy'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='topoext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='virt-ssbd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='lbrv'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='tsc-scale'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='vmcb-clean'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='pause-filter'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='pfthreshold'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='svme-addr-chk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='rdctl-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='skip-l1dfl-vmentry'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='mds-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature name='pschange-mc-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <pages unit='KiB' size='4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <pages unit='KiB' size='2048'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <pages unit='KiB' size='1048576'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </cpu>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <power_management>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <suspend_mem/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <suspend_disk/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <suspend_hybrid/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </power_management>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <iommu support='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <migration_features>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <live/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <uri_transports>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <uri_transport>tcp</uri_transport>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <uri_transport>rdma</uri_transport>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </uri_transports>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </migration_features>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <topology>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <cells num='1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <cell id='0'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:          <memory unit='KiB'>7864316</memory>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:          <pages unit='KiB' size='4'>1966079</pages>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:          <pages unit='KiB' size='2048'>0</pages>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:          <distances>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:            <sibling id='0' value='10'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:          </distances>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:          <cpus num='8'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:          </cpus>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        </cell>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </cells>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </topology>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <cache>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </cache>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <secmodel>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model>selinux</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <doi>0</doi>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </secmodel>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <secmodel>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model>dac</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <doi>0</doi>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </secmodel>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </host>
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <guest>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <os_type>hvm</os_type>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <arch name='i686'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <wordsize>32</wordsize>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <domain type='qemu'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <domain type='kvm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </arch>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <features>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <pae/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <nonpae/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <acpi default='on' toggle='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <apic default='on' toggle='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <cpuselection/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <deviceboot/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <disksnapshot default='on' toggle='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <externalSnapshot/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </features>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </guest>
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <guest>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <os_type>hvm</os_type>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <arch name='x86_64'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <wordsize>64</wordsize>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <domain type='qemu'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <domain type='kvm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </arch>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <features>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <acpi default='on' toggle='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <apic default='on' toggle='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <cpuselection/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <deviceboot/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <disksnapshot default='on' toggle='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <externalSnapshot/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </features>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </guest>
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 
Dec  1 17:22:16 np0005541603 nova_compute[188566]: </capabilities>
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.100 188570 DEBUG nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.135 188570 DEBUG nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  1 17:22:16 np0005541603 nova_compute[188566]: <domainCapabilities>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <domain>kvm</domain>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <arch>i686</arch>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <vcpu max='240'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <iothreads supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <os supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <enum name='firmware'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <loader supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>rom</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pflash</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='readonly'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>yes</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>no</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='secure'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>no</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </loader>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </os>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <cpu>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <mode name='host-passthrough' supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='hostPassthroughMigratable'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>on</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>off</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </mode>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <mode name='maximum' supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='maximumMigratable'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>on</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>off</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </mode>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <mode name='host-model' supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <vendor>AMD</vendor>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='x2apic'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='hypervisor'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='stibp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='ssbd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='overflow-recov'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='succor'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='ibrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='lbrv'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='tsc-scale'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='flushbyasid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='pause-filter'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='pfthreshold'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='disable' name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </mode>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <mode name='custom' supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-noTSX'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cooperlake'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cooperlake-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cooperlake-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Denverton'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mpx'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Denverton-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mpx'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Denverton-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Denverton-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Dhyana-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Genoa'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amd-psfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='auto-ibrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='stibp-always-on'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amd-psfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='auto-ibrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='stibp-always-on'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Milan'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Milan-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Milan-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amd-psfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='stibp-always-on'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Rome'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Rome-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Rome-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Rome-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='GraniteRapids'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='prefetchiti'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='GraniteRapids-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='prefetchiti'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='GraniteRapids-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx10'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx10-128'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx10-256'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx10-512'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='prefetchiti'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-noTSX'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v5'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v6'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v7'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='IvyBridge'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='IvyBridge-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='IvyBridge-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='IvyBridge-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='KnightsMill'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-4fmaps'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-4vnniw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512er'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512pf'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='KnightsMill-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-4fmaps'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-4vnniw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512er'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512pf'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Opteron_G4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fma4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xop'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Opteron_G4-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fma4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xop'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Opteron_G5'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fma4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tbm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xop'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Opteron_G5-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fma4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tbm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xop'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SapphireRapids'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SapphireRapids-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SapphireRapids-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SapphireRapids-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SierraForest'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-ne-convert'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cmpccxadd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SierraForest-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-ne-convert'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cmpccxadd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v5'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='core-capability'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mpx'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='split-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='core-capability'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mpx'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='split-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='core-capability'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='split-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='core-capability'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='split-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='athlon'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnow'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnowext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='athlon-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnow'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnowext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='core2duo'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='core2duo-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='coreduo'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='coreduo-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='n270'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='n270-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='phenom'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnow'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnowext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='phenom-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnow'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnowext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </mode>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </cpu>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <memoryBacking supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <enum name='sourceType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>file</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>anonymous</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>memfd</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </memoryBacking>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <devices>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <disk supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='diskDevice'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>disk</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>cdrom</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>floppy</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>lun</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='bus'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>ide</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>fdc</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>scsi</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>usb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>sata</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio-transitional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio-non-transitional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </disk>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <graphics supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vnc</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>egl-headless</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>dbus</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </graphics>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <video supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='modelType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vga</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>cirrus</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>none</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>bochs</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>ramfb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </video>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <hostdev supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='mode'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>subsystem</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='startupPolicy'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>default</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>mandatory</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>requisite</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>optional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='subsysType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>usb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pci</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>scsi</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='capsType'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='pciBackend'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </hostdev>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <rng supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio-transitional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio-non-transitional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendModel'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>random</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>egd</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>builtin</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </rng>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <filesystem supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='driverType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>path</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>handle</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtiofs</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </filesystem>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <tpm supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tpm-tis</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tpm-crb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendModel'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>emulator</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>external</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendVersion'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>2.0</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </tpm>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <redirdev supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='bus'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>usb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </redirdev>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <channel supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pty</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>unix</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </channel>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <crypto supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>qemu</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendModel'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>builtin</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </crypto>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <interface supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>default</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>passt</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </interface>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <panic supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>isa</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>hyperv</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </panic>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <console supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>null</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vc</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pty</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>dev</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>file</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pipe</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>stdio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>udp</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tcp</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>unix</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>qemu-vdagent</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>dbus</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </console>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </devices>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <features>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <gic supported='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <vmcoreinfo supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <genid supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <backingStoreInput supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <backup supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <async-teardown supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <ps2 supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <sev supported='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <sgx supported='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <hyperv supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='features'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>relaxed</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vapic</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>spinlocks</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vpindex</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>runtime</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>synic</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>stimer</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>reset</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vendor_id</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>frequencies</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>reenlightenment</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tlbflush</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>ipi</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>avic</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>emsr_bitmap</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>xmm_input</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <defaults>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <spinlocks>4095</spinlocks>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <stimer_direct>on</stimer_direct>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </defaults>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </hyperv>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <launchSecurity supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='sectype'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tdx</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </launchSecurity>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </features>
Dec  1 17:22:16 np0005541603 nova_compute[188566]: </domainCapabilities>
Dec  1 17:22:16 np0005541603 nova_compute[188566]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.145 188570 DEBUG nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  1 17:22:16 np0005541603 nova_compute[188566]: <domainCapabilities>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <domain>kvm</domain>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <arch>i686</arch>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <vcpu max='4096'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <iothreads supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <os supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <enum name='firmware'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <loader supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>rom</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pflash</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='readonly'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>yes</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>no</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='secure'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>no</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </loader>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </os>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <cpu>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <mode name='host-passthrough' supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='hostPassthroughMigratable'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>on</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>off</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </mode>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <mode name='maximum' supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='maximumMigratable'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>on</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>off</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </mode>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <mode name='host-model' supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <vendor>AMD</vendor>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='x2apic'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='hypervisor'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='stibp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='ssbd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='overflow-recov'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='succor'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='ibrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='lbrv'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='tsc-scale'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='flushbyasid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='pause-filter'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='pfthreshold'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='disable' name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </mode>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <mode name='custom' supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-noTSX'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cooperlake'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cooperlake-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cooperlake-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Denverton'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mpx'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Denverton-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mpx'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Denverton-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Denverton-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Dhyana-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Genoa'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amd-psfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='auto-ibrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='stibp-always-on'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amd-psfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='auto-ibrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='stibp-always-on'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Milan'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Milan-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Milan-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amd-psfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='stibp-always-on'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Rome'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Rome-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Rome-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Rome-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='GraniteRapids'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='prefetchiti'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='GraniteRapids-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='prefetchiti'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='GraniteRapids-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx10'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx10-128'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx10-256'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx10-512'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='prefetchiti'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-noTSX'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v5'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v6'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v7'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='IvyBridge'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='IvyBridge-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='IvyBridge-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='IvyBridge-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='KnightsMill'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-4fmaps'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-4vnniw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512er'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512pf'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='KnightsMill-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-4fmaps'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-4vnniw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512er'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512pf'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Opteron_G4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fma4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xop'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Opteron_G4-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fma4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xop'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Opteron_G5'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fma4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tbm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xop'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Opteron_G5-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fma4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tbm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xop'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SapphireRapids'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SapphireRapids-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SapphireRapids-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SapphireRapids-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SierraForest'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-ne-convert'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cmpccxadd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SierraForest-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-ne-convert'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cmpccxadd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v5'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='core-capability'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mpx'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='split-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='core-capability'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mpx'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='split-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='core-capability'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='split-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='core-capability'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='split-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='athlon'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnow'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnowext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='athlon-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnow'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnowext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='core2duo'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='core2duo-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='coreduo'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='coreduo-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='n270'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='n270-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='phenom'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnow'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnowext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='phenom-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnow'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnowext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </mode>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </cpu>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <memoryBacking supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <enum name='sourceType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>file</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>anonymous</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>memfd</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </memoryBacking>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <devices>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <disk supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='diskDevice'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>disk</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>cdrom</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>floppy</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>lun</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='bus'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>fdc</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>scsi</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>usb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>sata</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio-transitional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio-non-transitional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </disk>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <graphics supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vnc</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>egl-headless</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>dbus</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </graphics>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <video supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='modelType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vga</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>cirrus</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>none</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>bochs</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>ramfb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </video>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <hostdev supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='mode'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>subsystem</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='startupPolicy'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>default</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>mandatory</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>requisite</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>optional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='subsysType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>usb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pci</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>scsi</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='capsType'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='pciBackend'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </hostdev>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <rng supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio-transitional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio-non-transitional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendModel'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>random</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>egd</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>builtin</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </rng>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <filesystem supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='driverType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>path</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>handle</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtiofs</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </filesystem>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <tpm supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tpm-tis</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tpm-crb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendModel'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>emulator</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>external</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendVersion'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>2.0</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </tpm>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <redirdev supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='bus'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>usb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </redirdev>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <channel supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pty</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>unix</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </channel>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <crypto supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>qemu</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendModel'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>builtin</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </crypto>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <interface supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>default</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>passt</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </interface>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <panic supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>isa</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>hyperv</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </panic>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <console supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>null</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vc</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pty</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>dev</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>file</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pipe</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>stdio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>udp</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tcp</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>unix</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>qemu-vdagent</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>dbus</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </console>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </devices>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <features>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <gic supported='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <vmcoreinfo supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <genid supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <backingStoreInput supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <backup supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <async-teardown supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <ps2 supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <sev supported='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <sgx supported='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <hyperv supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='features'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>relaxed</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vapic</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>spinlocks</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vpindex</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>runtime</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>synic</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>stimer</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>reset</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vendor_id</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>frequencies</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>reenlightenment</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tlbflush</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>ipi</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>avic</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>emsr_bitmap</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>xmm_input</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <defaults>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <spinlocks>4095</spinlocks>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <stimer_direct>on</stimer_direct>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </defaults>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </hyperv>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <launchSecurity supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='sectype'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tdx</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </launchSecurity>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </features>
Dec  1 17:22:16 np0005541603 nova_compute[188566]: </domainCapabilities>
Dec  1 17:22:16 np0005541603 nova_compute[188566]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.197 188570 DEBUG nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.202 188570 DEBUG nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  1 17:22:16 np0005541603 nova_compute[188566]: <domainCapabilities>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <domain>kvm</domain>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <arch>x86_64</arch>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <vcpu max='240'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <iothreads supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <os supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <enum name='firmware'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <loader supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>rom</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pflash</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='readonly'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>yes</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>no</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='secure'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>no</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </loader>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </os>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <cpu>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <mode name='host-passthrough' supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='hostPassthroughMigratable'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>on</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>off</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </mode>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <mode name='maximum' supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='maximumMigratable'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>on</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>off</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </mode>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <mode name='host-model' supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <vendor>AMD</vendor>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='x2apic'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='hypervisor'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='stibp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='ssbd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='overflow-recov'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='succor'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='ibrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='lbrv'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='tsc-scale'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='flushbyasid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='pause-filter'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='pfthreshold'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='disable' name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </mode>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <mode name='custom' supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-noTSX'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cooperlake'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cooperlake-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cooperlake-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Denverton'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mpx'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Denverton-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mpx'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Denverton-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Denverton-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Dhyana-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Genoa'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amd-psfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='auto-ibrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='stibp-always-on'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amd-psfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='auto-ibrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='stibp-always-on'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Milan'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Milan-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Milan-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amd-psfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='stibp-always-on'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Rome'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Rome-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Rome-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Rome-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='GraniteRapids'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='prefetchiti'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='GraniteRapids-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='prefetchiti'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='GraniteRapids-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx10'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx10-128'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx10-256'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx10-512'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='prefetchiti'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-noTSX'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v5'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v6'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v7'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='IvyBridge'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='IvyBridge-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='IvyBridge-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='IvyBridge-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='KnightsMill'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-4fmaps'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-4vnniw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512er'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512pf'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='KnightsMill-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-4fmaps'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-4vnniw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512er'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512pf'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Opteron_G4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fma4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xop'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Opteron_G4-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fma4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xop'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Opteron_G5'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fma4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tbm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xop'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Opteron_G5-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fma4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tbm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xop'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SapphireRapids'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SapphireRapids-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SapphireRapids-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SapphireRapids-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SierraForest'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-ne-convert'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cmpccxadd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SierraForest-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-ne-convert'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cmpccxadd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v5'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='core-capability'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mpx'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='split-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='core-capability'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mpx'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='split-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='core-capability'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='split-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='core-capability'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='split-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='athlon'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnow'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnowext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='athlon-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnow'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnowext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='core2duo'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='core2duo-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='coreduo'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='coreduo-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='n270'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='n270-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='phenom'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnow'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnowext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='phenom-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnow'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnowext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </mode>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </cpu>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <memoryBacking supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <enum name='sourceType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>file</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>anonymous</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>memfd</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </memoryBacking>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <devices>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <disk supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='diskDevice'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>disk</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>cdrom</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>floppy</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>lun</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='bus'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>ide</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>fdc</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>scsi</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>usb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>sata</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio-transitional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio-non-transitional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </disk>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <graphics supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vnc</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>egl-headless</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>dbus</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </graphics>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <video supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='modelType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vga</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>cirrus</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>none</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>bochs</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>ramfb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </video>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <hostdev supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='mode'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>subsystem</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='startupPolicy'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>default</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>mandatory</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>requisite</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>optional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='subsysType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>usb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pci</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>scsi</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='capsType'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='pciBackend'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </hostdev>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <rng supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio-transitional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio-non-transitional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendModel'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>random</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>egd</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>builtin</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </rng>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <filesystem supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='driverType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>path</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>handle</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtiofs</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </filesystem>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <tpm supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tpm-tis</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tpm-crb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendModel'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>emulator</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>external</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendVersion'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>2.0</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </tpm>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <redirdev supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='bus'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>usb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </redirdev>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <channel supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pty</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>unix</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </channel>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <crypto supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>qemu</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendModel'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>builtin</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </crypto>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <interface supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>default</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>passt</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </interface>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <panic supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>isa</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>hyperv</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </panic>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <console supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>null</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vc</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pty</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>dev</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>file</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pipe</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>stdio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>udp</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tcp</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>unix</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>qemu-vdagent</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>dbus</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </console>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </devices>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <features>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <gic supported='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <vmcoreinfo supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <genid supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <backingStoreInput supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <backup supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <async-teardown supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <ps2 supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <sev supported='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <sgx supported='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <hyperv supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='features'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>relaxed</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vapic</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>spinlocks</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vpindex</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>runtime</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>synic</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>stimer</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>reset</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vendor_id</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>frequencies</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>reenlightenment</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tlbflush</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>ipi</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>avic</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>emsr_bitmap</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>xmm_input</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <defaults>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <spinlocks>4095</spinlocks>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <stimer_direct>on</stimer_direct>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </defaults>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </hyperv>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <launchSecurity supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='sectype'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tdx</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </launchSecurity>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </features>
Dec  1 17:22:16 np0005541603 nova_compute[188566]: </domainCapabilities>
Dec  1 17:22:16 np0005541603 nova_compute[188566]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.281 188570 DEBUG nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  1 17:22:16 np0005541603 nova_compute[188566]: <domainCapabilities>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <domain>kvm</domain>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <arch>x86_64</arch>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <vcpu max='4096'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <iothreads supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <os supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <enum name='firmware'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>efi</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <loader supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>rom</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pflash</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='readonly'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>yes</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>no</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='secure'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>yes</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>no</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </loader>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </os>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <cpu>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <mode name='host-passthrough' supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='hostPassthroughMigratable'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>on</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>off</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </mode>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <mode name='maximum' supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='maximumMigratable'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>on</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>off</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </mode>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <mode name='host-model' supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <vendor>AMD</vendor>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='x2apic'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='hypervisor'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='stibp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='ssbd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='overflow-recov'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='succor'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='ibrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='lbrv'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='tsc-scale'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='flushbyasid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='pause-filter'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='pfthreshold'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <feature policy='disable' name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </mode>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <mode name='custom' supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-noTSX'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Broadwell-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cooperlake'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cooperlake-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Cooperlake-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Denverton'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mpx'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Denverton-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mpx'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Denverton-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Denverton-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Dhyana-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Genoa'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amd-psfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='auto-ibrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='stibp-always-on'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amd-psfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='auto-ibrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='stibp-always-on'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Milan'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Milan-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Milan-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amd-psfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='stibp-always-on'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Rome'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Rome-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Rome-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-Rome-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='EPYC-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='GraniteRapids'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='prefetchiti'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='GraniteRapids-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='prefetchiti'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='GraniteRapids-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx10'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx10-128'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx10-256'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx10-512'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='prefetchiti'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-noTSX'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Haswell-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v5'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v6'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Icelake-Server-v7'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='IvyBridge'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='IvyBridge-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='IvyBridge-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='IvyBridge-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='KnightsMill'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-4fmaps'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-4vnniw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512er'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512pf'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='KnightsMill-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-4fmaps'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-4vnniw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512er'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512pf'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Opteron_G4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fma4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xop'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Opteron_G4-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fma4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xop'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Opteron_G5'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fma4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tbm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xop'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Opteron_G5-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fma4'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tbm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xop'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SapphireRapids'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SapphireRapids-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SapphireRapids-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SapphireRapids-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='amx-tile'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-bf16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-fp16'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bitalg'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrc'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fzrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='la57'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='taa-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xfd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SierraForest'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-ne-convert'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cmpccxadd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='SierraForest-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-ifma'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-ne-convert'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx-vnni-int8'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cmpccxadd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fbsdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='fsrs'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ibrs-all'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mcdt-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pbrsb-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='psdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='serialize'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vaes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Client-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='hle'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='rtm'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Skylake-Server-v5'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512bw'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512cd'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512dq'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512f'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='avx512vl'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='invpcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pcid'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='pku'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='core-capability'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mpx'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='split-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='core-capability'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='mpx'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='split-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge-v2'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='core-capability'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='split-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge-v3'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='core-capability'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='split-lock-detect'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='Snowridge-v4'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='cldemote'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='erms'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='gfni'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdir64b'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='movdiri'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='xsaves'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='athlon'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnow'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnowext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='athlon-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnow'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnowext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='core2duo'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='core2duo-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='coreduo'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='coreduo-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='n270'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='n270-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='ss'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='phenom'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnow'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnowext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <blockers model='phenom-v1'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnow'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <feature name='3dnowext'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </blockers>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </mode>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </cpu>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <memoryBacking supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <enum name='sourceType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>file</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>anonymous</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <value>memfd</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </memoryBacking>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <devices>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <disk supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='diskDevice'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>disk</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>cdrom</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>floppy</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>lun</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='bus'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>fdc</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>scsi</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>usb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>sata</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio-transitional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio-non-transitional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </disk>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <graphics supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vnc</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>egl-headless</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>dbus</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </graphics>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <video supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='modelType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vga</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>cirrus</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>none</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>bochs</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>ramfb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </video>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <hostdev supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='mode'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>subsystem</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='startupPolicy'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>default</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>mandatory</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>requisite</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>optional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='subsysType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>usb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pci</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>scsi</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='capsType'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='pciBackend'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </hostdev>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <rng supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio-transitional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtio-non-transitional</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendModel'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>random</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>egd</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>builtin</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </rng>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <filesystem supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='driverType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>path</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>handle</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>virtiofs</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </filesystem>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <tpm supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tpm-tis</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tpm-crb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendModel'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>emulator</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>external</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendVersion'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>2.0</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </tpm>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <redirdev supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='bus'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>usb</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </redirdev>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <channel supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pty</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>unix</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </channel>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <crypto supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>qemu</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendModel'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>builtin</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </crypto>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <interface supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='backendType'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>default</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>passt</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </interface>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <panic supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='model'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>isa</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>hyperv</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </panic>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <console supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='type'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>null</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vc</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pty</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>dev</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>file</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>pipe</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>stdio</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>udp</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tcp</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>unix</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>qemu-vdagent</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>dbus</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </console>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </devices>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  <features>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <gic supported='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <vmcoreinfo supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <genid supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <backingStoreInput supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <backup supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <async-teardown supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <ps2 supported='yes'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <sev supported='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <sgx supported='no'/>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <hyperv supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='features'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>relaxed</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vapic</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>spinlocks</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vpindex</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>runtime</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>synic</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>stimer</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>reset</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>vendor_id</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>frequencies</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>reenlightenment</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tlbflush</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>ipi</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>avic</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>emsr_bitmap</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>xmm_input</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <defaults>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <spinlocks>4095</spinlocks>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <stimer_direct>on</stimer_direct>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </defaults>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </hyperv>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    <launchSecurity supported='yes'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      <enum name='sectype'>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:        <value>tdx</value>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:      </enum>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:    </launchSecurity>
Dec  1 17:22:16 np0005541603 nova_compute[188566]:  </features>
Dec  1 17:22:16 np0005541603 nova_compute[188566]: </domainCapabilities>
Dec  1 17:22:16 np0005541603 nova_compute[188566]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.338 188570 DEBUG nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.338 188570 DEBUG nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.338 188570 DEBUG nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.339 188570 INFO nova.virt.libvirt.host [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Secure Boot support detected#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.340 188570 INFO nova.virt.libvirt.driver [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.341 188570 INFO nova.virt.libvirt.driver [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.351 188570 DEBUG nova.virt.libvirt.driver [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.405 188570 INFO nova.virt.node [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Determined node identity 4ec36104-0fe8-4c15-929c-861f303bb3ec from /var/lib/nova/compute_id#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.426 188570 WARNING nova.compute.manager [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Compute nodes ['4ec36104-0fe8-4c15-929c-861f303bb3ec'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.461 188570 INFO nova.compute.manager [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.490 188570 WARNING nova.compute.manager [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.491 188570 DEBUG oslo_concurrency.lockutils [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.491 188570 DEBUG oslo_concurrency.lockutils [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.491 188570 DEBUG oslo_concurrency.lockutils [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.491 188570 DEBUG nova.compute.resource_tracker [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 17:22:16 np0005541603 systemd[1]: Starting libvirt nodedev daemon...
Dec  1 17:22:16 np0005541603 systemd[1]: Started libvirt nodedev daemon.
Dec  1 17:22:16 np0005541603 python3.9[189424]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.846 188570 WARNING nova.virt.libvirt.driver [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.853 188570 DEBUG nova.compute.resource_tracker [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6080MB free_disk=72.42925643920898GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.854 188570 DEBUG oslo_concurrency.lockutils [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.854 188570 DEBUG oslo_concurrency.lockutils [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.874 188570 WARNING nova.compute.resource_tracker [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] No compute node record for compute-0.ctlplane.example.com:4ec36104-0fe8-4c15-929c-861f303bb3ec: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 4ec36104-0fe8-4c15-929c-861f303bb3ec could not be found.#033[00m
Dec  1 17:22:16 np0005541603 systemd[1]: Stopping nova_compute container...
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.899 188570 INFO nova.compute.resource_tracker [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 4ec36104-0fe8-4c15-929c-861f303bb3ec#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.970 188570 DEBUG nova.compute.resource_tracker [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.971 188570 DEBUG nova.compute.resource_tracker [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.983 188570 DEBUG oslo_concurrency.lockutils [None req-d90bafaf-0ef8-47d2-9a33-edc12b7ce488 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.983 188570 DEBUG oslo_concurrency.lockutils [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.984 188570 DEBUG oslo_concurrency.lockutils [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 17:22:16 np0005541603 nova_compute[188566]: 2025-12-01 22:22:16.984 188570 DEBUG oslo_concurrency.lockutils [None req-25088219-d4ce-456d-9811-b5efabdc5f84 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 17:22:17 np0005541603 virtqemud[189130]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  1 17:22:17 np0005541603 virtqemud[189130]: hostname: compute-0
Dec  1 17:22:17 np0005541603 virtqemud[189130]: End of file while reading data: Input/output error
Dec  1 17:22:17 np0005541603 systemd[1]: libpod-3c9406d8bcc46f24b8b33e689719344d26b580d56ba4929a7e1cc6ae37ff5057.scope: Deactivated successfully.
Dec  1 17:22:17 np0005541603 systemd[1]: libpod-3c9406d8bcc46f24b8b33e689719344d26b580d56ba4929a7e1cc6ae37ff5057.scope: Consumed 3.408s CPU time.
Dec  1 17:22:17 np0005541603 podman[189450]: 2025-12-01 22:22:17.465656642 +0000 UTC m=+0.545720258 container died 3c9406d8bcc46f24b8b33e689719344d26b580d56ba4929a7e1cc6ae37ff5057 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 17:22:17 np0005541603 systemd[1]: var-lib-containers-storage-overlay-09bd7e4ad061c0458706850cdbd2c9d5b27b53c40b078472427116fac158d000-merged.mount: Deactivated successfully.
Dec  1 17:22:17 np0005541603 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3c9406d8bcc46f24b8b33e689719344d26b580d56ba4929a7e1cc6ae37ff5057-userdata-shm.mount: Deactivated successfully.
Dec  1 17:22:17 np0005541603 podman[189450]: 2025-12-01 22:22:17.566716476 +0000 UTC m=+0.646780102 container cleanup 3c9406d8bcc46f24b8b33e689719344d26b580d56ba4929a7e1cc6ae37ff5057 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:22:17 np0005541603 podman[189450]: nova_compute
Dec  1 17:22:17 np0005541603 podman[189479]: nova_compute
Dec  1 17:22:17 np0005541603 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec  1 17:22:17 np0005541603 systemd[1]: Stopped nova_compute container.
Dec  1 17:22:17 np0005541603 systemd[1]: Starting nova_compute container...
Dec  1 17:22:17 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:22:17 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bd7e4ad061c0458706850cdbd2c9d5b27b53c40b078472427116fac158d000/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  1 17:22:17 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bd7e4ad061c0458706850cdbd2c9d5b27b53c40b078472427116fac158d000/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  1 17:22:17 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bd7e4ad061c0458706850cdbd2c9d5b27b53c40b078472427116fac158d000/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  1 17:22:17 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bd7e4ad061c0458706850cdbd2c9d5b27b53c40b078472427116fac158d000/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  1 17:22:17 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bd7e4ad061c0458706850cdbd2c9d5b27b53c40b078472427116fac158d000/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  1 17:22:17 np0005541603 podman[189492]: 2025-12-01 22:22:17.785671431 +0000 UTC m=+0.104895533 container init 3c9406d8bcc46f24b8b33e689719344d26b580d56ba4929a7e1cc6ae37ff5057 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 17:22:17 np0005541603 podman[189492]: 2025-12-01 22:22:17.792331831 +0000 UTC m=+0.111555893 container start 3c9406d8bcc46f24b8b33e689719344d26b580d56ba4929a7e1cc6ae37ff5057 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 17:22:17 np0005541603 podman[189492]: nova_compute
Dec  1 17:22:17 np0005541603 nova_compute[189508]: + sudo -E kolla_set_configs
Dec  1 17:22:17 np0005541603 systemd[1]: Started nova_compute container.
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Validating config file
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Copying service configuration files
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Deleting /etc/ceph
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Creating directory /etc/ceph
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Setting permission for /etc/ceph
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Writing out command to execute
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  1 17:22:17 np0005541603 nova_compute[189508]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  1 17:22:17 np0005541603 nova_compute[189508]: ++ cat /run_command
Dec  1 17:22:17 np0005541603 nova_compute[189508]: + CMD=nova-compute
Dec  1 17:22:17 np0005541603 nova_compute[189508]: + ARGS=
Dec  1 17:22:17 np0005541603 nova_compute[189508]: + sudo kolla_copy_cacerts
Dec  1 17:22:17 np0005541603 nova_compute[189508]: Running command: 'nova-compute'
Dec  1 17:22:17 np0005541603 nova_compute[189508]: + [[ ! -n '' ]]
Dec  1 17:22:17 np0005541603 nova_compute[189508]: + . kolla_extend_start
Dec  1 17:22:17 np0005541603 nova_compute[189508]: + echo 'Running command: '\''nova-compute'\'''
Dec  1 17:22:17 np0005541603 nova_compute[189508]: + umask 0022
Dec  1 17:22:17 np0005541603 nova_compute[189508]: + exec nova-compute
Dec  1 17:22:18 np0005541603 python3.9[189671]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  1 17:22:18 np0005541603 systemd[1]: Started libpod-conmon-458c6944243be1ffc91527f738277158acf436291f357a38e7fd05be7960b4ac.scope.
Dec  1 17:22:18 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:22:18 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9d7a55f31740dce216be35e062e4750d8ebae6b437f2757fb18f2a1c37cb27/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec  1 17:22:18 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9d7a55f31740dce216be35e062e4750d8ebae6b437f2757fb18f2a1c37cb27/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  1 17:22:18 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9d7a55f31740dce216be35e062e4750d8ebae6b437f2757fb18f2a1c37cb27/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec  1 17:22:18 np0005541603 podman[189697]: 2025-12-01 22:22:18.921134832 +0000 UTC m=+0.161480787 container init 458c6944243be1ffc91527f738277158acf436291f357a38e7fd05be7960b4ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 17:22:18 np0005541603 podman[189697]: 2025-12-01 22:22:18.930484759 +0000 UTC m=+0.170830694 container start 458c6944243be1ffc91527f738277158acf436291f357a38e7fd05be7960b4ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  1 17:22:18 np0005541603 python3.9[189671]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec  1 17:22:18 np0005541603 nova_compute_init[189718]: INFO:nova_statedir:Applying nova statedir ownership
Dec  1 17:22:18 np0005541603 nova_compute_init[189718]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec  1 17:22:18 np0005541603 nova_compute_init[189718]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec  1 17:22:18 np0005541603 nova_compute_init[189718]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec  1 17:22:18 np0005541603 nova_compute_init[189718]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec  1 17:22:18 np0005541603 nova_compute_init[189718]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec  1 17:22:18 np0005541603 nova_compute_init[189718]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec  1 17:22:18 np0005541603 nova_compute_init[189718]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec  1 17:22:18 np0005541603 nova_compute_init[189718]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec  1 17:22:18 np0005541603 nova_compute_init[189718]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec  1 17:22:18 np0005541603 nova_compute_init[189718]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec  1 17:22:18 np0005541603 nova_compute_init[189718]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec  1 17:22:18 np0005541603 nova_compute_init[189718]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec  1 17:22:18 np0005541603 nova_compute_init[189718]: INFO:nova_statedir:Nova statedir ownership complete
Dec  1 17:22:18 np0005541603 systemd[1]: libpod-458c6944243be1ffc91527f738277158acf436291f357a38e7fd05be7960b4ac.scope: Deactivated successfully.
Dec  1 17:22:19 np0005541603 podman[189719]: 2025-12-01 22:22:19.011936333 +0000 UTC m=+0.046162628 container died 458c6944243be1ffc91527f738277158acf436291f357a38e7fd05be7960b4ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 17:22:19 np0005541603 systemd[1]: var-lib-containers-storage-overlay-4a9d7a55f31740dce216be35e062e4750d8ebae6b437f2757fb18f2a1c37cb27-merged.mount: Deactivated successfully.
Dec  1 17:22:19 np0005541603 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-458c6944243be1ffc91527f738277158acf436291f357a38e7fd05be7960b4ac-userdata-shm.mount: Deactivated successfully.
Dec  1 17:22:19 np0005541603 podman[189730]: 2025-12-01 22:22:19.066483259 +0000 UTC m=+0.058356296 container cleanup 458c6944243be1ffc91527f738277158acf436291f357a38e7fd05be7960b4ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, container_name=nova_compute_init, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 17:22:19 np0005541603 systemd[1]: libpod-conmon-458c6944243be1ffc91527f738277158acf436291f357a38e7fd05be7960b4ac.scope: Deactivated successfully.
Dec  1 17:22:19 np0005541603 systemd-logind[788]: Session 23 logged out. Waiting for processes to exit.
Dec  1 17:22:19 np0005541603 systemd[1]: session-23.scope: Deactivated successfully.
Dec  1 17:22:19 np0005541603 systemd[1]: session-23.scope: Consumed 2min 21.169s CPU time.
Dec  1 17:22:19 np0005541603 systemd-logind[788]: Removed session 23.
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.005 189512 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.005 189512 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.006 189512 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.006 189512 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.160 189512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.188 189512 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.189 189512 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.676 189512 INFO nova.virt.driver [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.801 189512 INFO nova.compute.provider_config [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.818 189512 DEBUG oslo_concurrency.lockutils [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.819 189512 DEBUG oslo_concurrency.lockutils [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.819 189512 DEBUG oslo_concurrency.lockutils [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.819 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.820 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.820 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.820 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.820 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.820 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.821 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.821 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.821 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.821 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.821 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.822 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.822 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.822 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.822 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.822 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.823 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.823 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.823 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.823 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.823 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.823 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.824 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.824 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.824 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.824 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.824 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.825 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.825 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.825 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.825 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.825 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.826 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.826 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.826 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.826 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.826 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.827 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.827 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.827 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.827 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.827 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.828 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.828 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.828 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.828 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.828 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.829 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.829 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.829 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.829 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.829 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.830 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.830 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.830 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.830 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.831 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.831 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.831 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.831 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.831 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.831 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.832 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.832 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.832 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.832 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.832 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.832 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.833 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.833 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.833 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.833 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.833 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.834 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.834 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.834 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.834 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.834 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.835 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.835 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.835 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.835 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.835 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.836 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.836 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.836 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.836 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.836 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.836 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.837 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.837 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.837 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.837 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.837 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.838 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.838 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.838 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.838 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.838 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.838 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.839 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.839 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.839 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.839 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.839 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.840 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.840 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.840 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.840 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.840 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.840 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.841 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.841 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.841 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.841 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.841 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.842 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.842 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.842 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.842 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.842 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.842 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.843 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.843 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.843 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.843 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.843 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.843 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.843 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.844 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.844 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.844 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.844 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.844 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.844 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.844 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.844 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.845 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.845 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.845 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.845 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.845 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.845 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.846 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.846 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.846 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.846 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.846 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.846 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.847 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.847 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.847 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.847 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.847 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.847 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.847 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.848 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.848 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.848 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.848 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.848 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.848 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.848 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.849 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.849 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.849 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.849 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.849 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.849 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.850 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.850 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.850 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.850 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.850 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.850 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.850 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.851 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.851 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.851 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.851 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.851 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.851 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.852 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.852 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.852 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.852 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.852 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.852 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.852 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.853 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.853 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.853 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.853 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.854 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.855 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.856 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.856 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.857 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.857 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.857 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.858 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.858 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.859 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.859 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.859 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.860 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.860 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.861 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.861 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.861 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.862 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.862 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.863 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.863 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.863 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.864 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.864 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.864 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.865 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.865 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.866 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.866 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.866 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.867 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.867 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.868 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.868 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.869 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.869 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.870 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.870 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.870 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.871 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.871 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.871 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.872 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.872 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.873 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.873 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.873 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.874 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.874 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.874 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.875 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.875 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.876 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.876 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.876 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.877 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.877 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.877 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.878 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.878 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.878 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.879 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.879 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.879 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.879 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.880 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.880 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.880 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.881 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.881 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.881 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.882 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.882 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.882 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.883 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.883 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.883 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.884 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.884 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.884 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.885 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.885 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.885 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.886 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.886 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.886 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.887 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.887 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.887 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.888 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.888 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.889 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.889 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.889 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.890 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.890 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.891 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.891 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.891 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.892 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.892 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.892 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.893 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.893 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.893 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.894 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.894 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.894 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.895 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.895 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.895 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.896 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.896 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.897 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.897 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.898 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.898 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.898 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.899 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.899 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.899 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.899 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.900 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.900 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.900 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.900 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.901 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.901 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.901 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.901 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.901 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.902 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.902 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.902 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.902 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.902 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.903 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.903 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.903 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.903 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.903 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.904 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.904 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.904 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.904 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.905 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.905 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.905 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.905 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.905 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.906 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.906 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.906 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.906 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.906 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.907 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.907 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.907 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.907 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.907 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.908 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.908 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.909 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.909 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.909 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.909 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.910 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.910 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.910 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.911 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.911 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.911 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.912 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.912 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.912 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.912 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.913 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.913 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.913 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.914 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.914 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.914 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.914 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.915 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.915 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.915 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.915 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.916 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.916 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.916 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.917 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.917 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.917 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.917 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.917 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.918 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.918 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.918 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.919 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.919 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.919 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.920 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.920 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.920 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.920 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.920 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.921 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.921 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.921 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.921 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.922 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.922 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.922 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.922 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.922 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.923 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.923 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.923 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.923 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.923 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.924 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.924 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.924 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.924 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.925 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.925 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.925 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.925 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.926 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.926 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.926 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.926 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.927 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.927 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.927 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.930 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.930 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.931 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.931 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.931 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.931 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.932 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.932 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.932 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.932 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.932 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.933 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.933 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.933 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.933 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.933 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.934 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.934 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.934 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.934 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.934 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.935 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.935 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.935 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.935 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.936 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.936 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.936 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.936 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.937 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.937 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.937 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.937 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.938 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.938 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.938 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.938 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.939 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.939 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.939 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.939 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.940 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.940 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.940 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.940 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.941 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.941 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.941 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.941 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.942 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.942 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.942 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.942 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.943 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.943 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.943 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.943 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.943 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.943 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.944 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.944 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.944 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.944 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.945 189512 WARNING oslo_config.cfg [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  1 17:22:20 np0005541603 nova_compute[189508]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  1 17:22:20 np0005541603 nova_compute[189508]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  1 17:22:20 np0005541603 nova_compute[189508]: and ``live_migration_inbound_addr`` respectively.
Dec  1 17:22:20 np0005541603 nova_compute[189508]: ).  Its value may be silently ignored in the future.#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.945 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.945 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.945 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.946 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.946 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.946 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.946 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.946 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.947 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.947 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.947 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.947 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.947 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.947 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.948 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.948 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.948 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.948 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.948 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.948 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.949 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.949 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.949 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.949 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.949 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.950 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.950 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.950 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.950 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.950 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.950 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.951 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.951 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.951 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.951 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.952 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.952 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.952 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.952 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.952 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.952 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.953 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.953 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.953 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.953 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.953 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.953 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.953 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.954 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.954 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.954 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.954 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.954 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.954 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.954 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.955 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.955 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.955 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.955 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.955 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.955 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.956 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.956 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.956 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.956 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.956 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.957 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.957 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.957 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.957 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.957 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.957 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.958 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.958 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.958 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.958 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.958 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.958 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.958 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.959 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.959 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.959 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.959 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.959 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.959 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.960 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.960 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.960 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.960 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.960 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.960 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.961 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.961 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.961 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.961 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.961 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.961 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.961 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.962 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.962 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.962 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.962 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.962 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.962 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.962 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.962 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.963 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.963 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.963 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.963 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.963 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.963 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.964 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.964 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.964 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.964 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.964 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.964 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.964 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.964 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.965 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.965 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.965 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.965 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.965 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.965 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.965 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.966 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.966 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.966 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.966 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.966 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.966 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.967 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.967 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.967 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.967 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.967 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.967 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.967 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.968 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.968 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.968 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.968 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.968 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.968 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.969 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.969 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.969 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.969 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.969 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.969 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.969 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.970 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.970 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.970 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.970 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.970 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.970 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.971 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.971 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.971 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.971 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.971 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.971 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.971 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.972 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.972 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.972 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.972 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.972 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.972 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.972 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.973 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.973 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.973 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.973 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.973 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.973 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.974 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.974 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.974 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.974 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.974 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.974 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.974 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.975 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.975 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.975 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.975 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.975 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.975 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.976 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.976 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.976 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.976 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.976 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.976 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.977 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.977 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.977 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.977 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.977 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.977 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.977 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.978 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.978 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.978 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.978 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.978 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.978 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.978 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.979 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.979 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.979 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.979 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.979 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.979 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.979 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.980 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.980 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.980 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.980 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.980 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.980 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.981 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.981 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.981 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.981 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.981 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.981 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.981 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.982 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.982 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.982 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.982 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.982 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.982 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.983 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.983 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.983 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.983 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.983 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.983 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.983 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.984 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.984 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.984 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.984 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.984 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.984 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.985 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.985 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.985 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.985 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.985 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.985 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.986 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.986 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.986 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.986 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.986 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.986 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.986 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.987 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.987 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.987 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.987 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.987 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.987 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.987 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.988 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.988 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.988 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.988 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.988 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.988 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.988 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.989 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.989 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.989 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.989 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.989 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.989 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.989 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.990 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.990 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.990 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.990 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.990 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.990 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.990 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.991 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.991 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.991 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.991 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.991 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.991 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.992 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.992 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.992 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.992 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.992 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.992 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.992 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.993 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.993 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.993 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.993 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.993 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.993 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.993 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.994 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.994 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.994 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.994 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.994 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.994 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.994 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.995 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.995 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.995 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.995 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.995 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.995 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.995 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.996 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.996 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.996 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.996 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.996 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.996 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.996 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.997 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.997 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.997 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.997 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.997 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.997 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.997 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.998 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.998 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.998 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.998 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.998 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.998 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.998 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.999 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.999 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.999 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.999 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.999 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:20 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.999 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:20.999 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.000 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.000 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.000 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.000 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.000 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.000 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.000 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.001 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.001 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.001 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.001 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.001 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.001 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.001 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.002 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.002 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.002 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.002 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.002 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.002 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.002 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.003 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.003 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.003 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.003 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.003 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.003 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.003 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.004 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.004 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.004 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.004 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.004 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.004 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.004 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.005 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.005 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.005 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.005 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.005 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.005 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.005 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.006 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.006 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.006 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.006 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.006 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.006 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.007 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.007 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.007 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.007 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.007 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.007 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.007 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.008 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.008 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.008 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.008 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.008 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.008 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.008 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.009 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.009 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.009 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.009 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.009 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.009 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.009 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.009 189512 DEBUG oslo_service.service [None req-2b9a4309-2352-4075-a581-fe45da23c61e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.011 189512 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.025 189512 INFO nova.virt.node [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Determined node identity 4ec36104-0fe8-4c15-929c-861f303bb3ec from /var/lib/nova/compute_id#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.026 189512 DEBUG nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.026 189512 DEBUG nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.026 189512 DEBUG nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.027 189512 DEBUG nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.041 189512 DEBUG nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fdec0a6dca0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.044 189512 DEBUG nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fdec0a6dca0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.045 189512 INFO nova.virt.libvirt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.052 189512 INFO nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Libvirt host capabilities <capabilities>
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <host>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <uuid>76dcf733-b3f8-4a52-82fd-91cdbadb534b</uuid>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <cpu>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <arch>x86_64</arch>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model>EPYC-Rome-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <vendor>AMD</vendor>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <microcode version='16777317'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <signature family='23' model='49' stepping='0'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='x2apic'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='tsc-deadline'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='osxsave'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='hypervisor'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='tsc_adjust'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='spec-ctrl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='stibp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='arch-capabilities'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='ssbd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='cmp_legacy'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='topoext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='virt-ssbd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='lbrv'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='tsc-scale'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='vmcb-clean'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='pause-filter'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='pfthreshold'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='svme-addr-chk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='rdctl-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='skip-l1dfl-vmentry'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='mds-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature name='pschange-mc-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <pages unit='KiB' size='4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <pages unit='KiB' size='2048'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <pages unit='KiB' size='1048576'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </cpu>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <power_management>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <suspend_mem/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <suspend_disk/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <suspend_hybrid/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </power_management>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <iommu support='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <migration_features>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <live/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <uri_transports>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <uri_transport>tcp</uri_transport>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <uri_transport>rdma</uri_transport>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </uri_transports>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </migration_features>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <topology>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <cells num='1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <cell id='0'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:          <memory unit='KiB'>7864316</memory>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:          <pages unit='KiB' size='4'>1966079</pages>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:          <pages unit='KiB' size='2048'>0</pages>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:          <distances>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:            <sibling id='0' value='10'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:          </distances>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:          <cpus num='8'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:          </cpus>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        </cell>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </cells>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </topology>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <cache>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </cache>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <secmodel>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model>selinux</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <doi>0</doi>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </secmodel>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <secmodel>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model>dac</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <doi>0</doi>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </secmodel>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </host>
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <guest>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <os_type>hvm</os_type>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <arch name='i686'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <wordsize>32</wordsize>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <domain type='qemu'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <domain type='kvm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </arch>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <features>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <pae/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <nonpae/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <acpi default='on' toggle='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <apic default='on' toggle='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <cpuselection/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <deviceboot/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <disksnapshot default='on' toggle='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <externalSnapshot/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </features>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </guest>
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <guest>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <os_type>hvm</os_type>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <arch name='x86_64'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <wordsize>64</wordsize>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <domain type='qemu'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <domain type='kvm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </arch>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <features>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <acpi default='on' toggle='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <apic default='on' toggle='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <cpuselection/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <deviceboot/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <disksnapshot default='on' toggle='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <externalSnapshot/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </features>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </guest>
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 
Dec  1 17:22:21 np0005541603 nova_compute[189508]: </capabilities>
Dec  1 17:22:21 np0005541603 nova_compute[189508]: #033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.058 189512 DEBUG nova.virt.libvirt.volume.mount [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.062 189512 DEBUG nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.066 189512 DEBUG nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  1 17:22:21 np0005541603 nova_compute[189508]: <domainCapabilities>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <domain>kvm</domain>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <arch>i686</arch>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <vcpu max='4096'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <iothreads supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <os supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <enum name='firmware'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <loader supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>rom</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pflash</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='readonly'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>yes</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>no</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='secure'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>no</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </loader>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </os>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <cpu>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <mode name='host-passthrough' supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='hostPassthroughMigratable'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>on</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>off</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </mode>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <mode name='maximum' supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='maximumMigratable'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>on</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>off</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </mode>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <mode name='host-model' supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <vendor>AMD</vendor>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='x2apic'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='hypervisor'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='stibp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='ssbd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='overflow-recov'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='succor'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='ibrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='lbrv'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='tsc-scale'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='flushbyasid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='pause-filter'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='pfthreshold'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='disable' name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </mode>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <mode name='custom' supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-noTSX'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cooperlake'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cooperlake-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cooperlake-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Denverton'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mpx'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Denverton-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mpx'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Denverton-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Denverton-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Dhyana-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Genoa'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amd-psfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='auto-ibrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='stibp-always-on'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amd-psfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='auto-ibrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='stibp-always-on'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Milan'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Milan-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Milan-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amd-psfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='stibp-always-on'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Rome'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Rome-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Rome-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Rome-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='GraniteRapids'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='prefetchiti'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='GraniteRapids-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='prefetchiti'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='GraniteRapids-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx10'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx10-128'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx10-256'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx10-512'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='prefetchiti'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-noTSX'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v5'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v6'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v7'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='IvyBridge'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='IvyBridge-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='IvyBridge-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='IvyBridge-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='KnightsMill'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-4fmaps'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-4vnniw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512er'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512pf'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='KnightsMill-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-4fmaps'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-4vnniw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512er'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512pf'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Opteron_G4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fma4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xop'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Opteron_G4-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fma4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xop'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Opteron_G5'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fma4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tbm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xop'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Opteron_G5-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fma4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tbm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xop'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SapphireRapids'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SapphireRapids-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SapphireRapids-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SapphireRapids-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SierraForest'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-ne-convert'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cmpccxadd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SierraForest-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-ne-convert'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cmpccxadd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v5'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='core-capability'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mpx'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='split-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='core-capability'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mpx'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='split-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='core-capability'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='split-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='core-capability'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='split-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='athlon'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnow'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnowext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='athlon-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnow'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnowext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='core2duo'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='core2duo-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='coreduo'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='coreduo-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='n270'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='n270-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='phenom'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnow'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnowext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='phenom-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnow'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnowext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </mode>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </cpu>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <memoryBacking supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <enum name='sourceType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>file</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>anonymous</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>memfd</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </memoryBacking>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <devices>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <disk supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='diskDevice'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>disk</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>cdrom</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>floppy</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>lun</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='bus'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>fdc</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>scsi</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>usb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>sata</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio-transitional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio-non-transitional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </disk>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <graphics supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vnc</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>egl-headless</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>dbus</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </graphics>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <video supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='modelType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vga</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>cirrus</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>none</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>bochs</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>ramfb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </video>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <hostdev supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='mode'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>subsystem</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='startupPolicy'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>default</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>mandatory</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>requisite</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>optional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='subsysType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>usb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pci</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>scsi</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='capsType'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='pciBackend'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </hostdev>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <rng supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio-transitional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio-non-transitional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendModel'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>random</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>egd</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>builtin</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </rng>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <filesystem supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='driverType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>path</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>handle</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtiofs</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </filesystem>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <tpm supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tpm-tis</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tpm-crb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendModel'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>emulator</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>external</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendVersion'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>2.0</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </tpm>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <redirdev supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='bus'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>usb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </redirdev>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <channel supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pty</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>unix</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </channel>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <crypto supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>qemu</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendModel'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>builtin</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </crypto>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <interface supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>default</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>passt</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </interface>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <panic supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>isa</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>hyperv</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </panic>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <console supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>null</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vc</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pty</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>dev</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>file</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pipe</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>stdio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>udp</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tcp</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>unix</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>qemu-vdagent</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>dbus</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </console>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </devices>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <features>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <gic supported='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <vmcoreinfo supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <genid supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <backingStoreInput supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <backup supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <async-teardown supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <ps2 supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <sev supported='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <sgx supported='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <hyperv supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='features'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>relaxed</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vapic</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>spinlocks</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vpindex</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>runtime</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>synic</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>stimer</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>reset</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vendor_id</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>frequencies</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>reenlightenment</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tlbflush</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>ipi</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>avic</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>emsr_bitmap</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>xmm_input</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <defaults>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <spinlocks>4095</spinlocks>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <stimer_direct>on</stimer_direct>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </defaults>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </hyperv>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <launchSecurity supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='sectype'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tdx</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </launchSecurity>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </features>
Dec  1 17:22:21 np0005541603 nova_compute[189508]: </domainCapabilities>
Dec  1 17:22:21 np0005541603 nova_compute[189508]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.074 189512 DEBUG nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  1 17:22:21 np0005541603 nova_compute[189508]: <domainCapabilities>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <domain>kvm</domain>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <arch>i686</arch>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <vcpu max='240'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <iothreads supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <os supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <enum name='firmware'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <loader supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>rom</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pflash</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='readonly'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>yes</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>no</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='secure'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>no</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </loader>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </os>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <cpu>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <mode name='host-passthrough' supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='hostPassthroughMigratable'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>on</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>off</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </mode>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <mode name='maximum' supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='maximumMigratable'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>on</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>off</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </mode>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <mode name='host-model' supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <vendor>AMD</vendor>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='x2apic'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='hypervisor'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='stibp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='ssbd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='overflow-recov'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='succor'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='ibrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='lbrv'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='tsc-scale'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='flushbyasid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='pause-filter'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='pfthreshold'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='disable' name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </mode>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <mode name='custom' supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-noTSX'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cooperlake'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cooperlake-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cooperlake-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Denverton'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mpx'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Denverton-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mpx'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Denverton-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Denverton-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Dhyana-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Genoa'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amd-psfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='auto-ibrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='stibp-always-on'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amd-psfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='auto-ibrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='stibp-always-on'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Milan'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Milan-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Milan-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amd-psfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='stibp-always-on'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Rome'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Rome-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Rome-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Rome-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='GraniteRapids'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='prefetchiti'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='GraniteRapids-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='prefetchiti'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='GraniteRapids-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx10'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx10-128'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx10-256'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx10-512'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='prefetchiti'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-noTSX'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v5'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v6'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v7'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='IvyBridge'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='IvyBridge-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='IvyBridge-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='IvyBridge-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='KnightsMill'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-4fmaps'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-4vnniw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512er'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512pf'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='KnightsMill-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-4fmaps'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-4vnniw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512er'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512pf'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Opteron_G4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fma4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xop'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Opteron_G4-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fma4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xop'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Opteron_G5'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fma4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tbm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xop'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Opteron_G5-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fma4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tbm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xop'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SapphireRapids'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SapphireRapids-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SapphireRapids-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SapphireRapids-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SierraForest'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-ne-convert'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cmpccxadd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SierraForest-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-ne-convert'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cmpccxadd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v5'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='core-capability'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mpx'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='split-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='core-capability'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mpx'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='split-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='core-capability'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='split-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='core-capability'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='split-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='athlon'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnow'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnowext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='athlon-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnow'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnowext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='core2duo'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='core2duo-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='coreduo'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='coreduo-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='n270'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='n270-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='phenom'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnow'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnowext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='phenom-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnow'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnowext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </mode>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </cpu>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <memoryBacking supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <enum name='sourceType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>file</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>anonymous</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>memfd</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </memoryBacking>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <devices>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <disk supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='diskDevice'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>disk</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>cdrom</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>floppy</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>lun</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='bus'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>ide</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>fdc</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>scsi</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>usb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>sata</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio-transitional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio-non-transitional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </disk>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <graphics supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vnc</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>egl-headless</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>dbus</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </graphics>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <video supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='modelType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vga</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>cirrus</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>none</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>bochs</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>ramfb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </video>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <hostdev supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='mode'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>subsystem</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='startupPolicy'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>default</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>mandatory</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>requisite</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>optional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='subsysType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>usb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pci</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>scsi</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='capsType'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='pciBackend'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </hostdev>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <rng supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio-transitional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio-non-transitional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendModel'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>random</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>egd</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>builtin</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </rng>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <filesystem supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='driverType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>path</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>handle</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtiofs</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </filesystem>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <tpm supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tpm-tis</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tpm-crb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendModel'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>emulator</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>external</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendVersion'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>2.0</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </tpm>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <redirdev supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='bus'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>usb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </redirdev>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <channel supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pty</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>unix</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </channel>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <crypto supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>qemu</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendModel'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>builtin</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </crypto>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <interface supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>default</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>passt</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </interface>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <panic supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>isa</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>hyperv</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </panic>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <console supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>null</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vc</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pty</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>dev</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>file</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pipe</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>stdio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>udp</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tcp</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>unix</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>qemu-vdagent</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>dbus</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </console>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </devices>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <features>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <gic supported='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <vmcoreinfo supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <genid supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <backingStoreInput supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <backup supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <async-teardown supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <ps2 supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <sev supported='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <sgx supported='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <hyperv supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='features'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>relaxed</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vapic</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>spinlocks</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vpindex</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>runtime</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>synic</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>stimer</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>reset</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vendor_id</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>frequencies</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>reenlightenment</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tlbflush</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>ipi</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>avic</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>emsr_bitmap</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>xmm_input</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <defaults>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <spinlocks>4095</spinlocks>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <stimer_direct>on</stimer_direct>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </defaults>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </hyperv>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <launchSecurity supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='sectype'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tdx</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </launchSecurity>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </features>
Dec  1 17:22:21 np0005541603 nova_compute[189508]: </domainCapabilities>
Dec  1 17:22:21 np0005541603 nova_compute[189508]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.111 189512 DEBUG nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.115 189512 DEBUG nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  1 17:22:21 np0005541603 nova_compute[189508]: <domainCapabilities>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <domain>kvm</domain>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <arch>x86_64</arch>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <vcpu max='4096'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <iothreads supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <os supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <enum name='firmware'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>efi</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <loader supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>rom</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pflash</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='readonly'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>yes</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>no</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='secure'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>yes</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>no</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </loader>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </os>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <cpu>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <mode name='host-passthrough' supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='hostPassthroughMigratable'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>on</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>off</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </mode>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <mode name='maximum' supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='maximumMigratable'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>on</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>off</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </mode>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <mode name='host-model' supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <vendor>AMD</vendor>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='x2apic'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='hypervisor'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='stibp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='ssbd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='overflow-recov'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='succor'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='ibrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='lbrv'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='tsc-scale'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='flushbyasid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='pause-filter'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='pfthreshold'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='disable' name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </mode>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <mode name='custom' supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-noTSX'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cooperlake'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cooperlake-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cooperlake-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Denverton'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mpx'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Denverton-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mpx'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Denverton-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Denverton-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Dhyana-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Genoa'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amd-psfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='auto-ibrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='stibp-always-on'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amd-psfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='auto-ibrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='stibp-always-on'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Milan'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Milan-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Milan-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amd-psfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='stibp-always-on'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Rome'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Rome-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Rome-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Rome-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='GraniteRapids'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='prefetchiti'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='GraniteRapids-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='prefetchiti'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='GraniteRapids-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx10'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx10-128'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx10-256'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx10-512'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='prefetchiti'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-noTSX'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v5'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v6'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v7'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='IvyBridge'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='IvyBridge-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='IvyBridge-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='IvyBridge-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='KnightsMill'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-4fmaps'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-4vnniw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512er'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512pf'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='KnightsMill-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-4fmaps'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-4vnniw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512er'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512pf'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Opteron_G4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fma4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xop'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Opteron_G4-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fma4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xop'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Opteron_G5'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fma4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tbm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xop'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Opteron_G5-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fma4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tbm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xop'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SapphireRapids'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SapphireRapids-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SapphireRapids-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SapphireRapids-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SierraForest'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-ne-convert'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cmpccxadd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SierraForest-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-ne-convert'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cmpccxadd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v5'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='core-capability'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mpx'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='split-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='core-capability'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mpx'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='split-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='core-capability'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='split-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='core-capability'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='split-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='athlon'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnow'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnowext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='athlon-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnow'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnowext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='core2duo'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='core2duo-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='coreduo'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='coreduo-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='n270'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='n270-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='phenom'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnow'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnowext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='phenom-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnow'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnowext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </mode>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </cpu>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <memoryBacking supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <enum name='sourceType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>file</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>anonymous</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>memfd</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </memoryBacking>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <devices>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <disk supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='diskDevice'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>disk</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>cdrom</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>floppy</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>lun</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='bus'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>fdc</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>scsi</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>usb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>sata</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio-transitional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio-non-transitional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </disk>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <graphics supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vnc</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>egl-headless</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>dbus</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </graphics>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <video supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='modelType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vga</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>cirrus</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>none</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>bochs</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>ramfb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </video>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <hostdev supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='mode'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>subsystem</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='startupPolicy'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>default</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>mandatory</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>requisite</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>optional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='subsysType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>usb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pci</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>scsi</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='capsType'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='pciBackend'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </hostdev>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <rng supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio-transitional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio-non-transitional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendModel'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>random</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>egd</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>builtin</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </rng>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <filesystem supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='driverType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>path</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>handle</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtiofs</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </filesystem>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <tpm supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tpm-tis</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tpm-crb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendModel'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>emulator</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>external</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendVersion'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>2.0</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </tpm>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <redirdev supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='bus'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>usb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </redirdev>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <channel supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pty</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>unix</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </channel>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <crypto supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>qemu</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendModel'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>builtin</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </crypto>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <interface supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>default</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>passt</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </interface>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <panic supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>isa</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>hyperv</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </panic>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <console supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>null</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vc</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pty</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>dev</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>file</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pipe</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>stdio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>udp</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tcp</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>unix</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>qemu-vdagent</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>dbus</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </console>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </devices>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <features>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <gic supported='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <vmcoreinfo supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <genid supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <backingStoreInput supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <backup supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <async-teardown supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <ps2 supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <sev supported='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <sgx supported='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <hyperv supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='features'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>relaxed</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vapic</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>spinlocks</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vpindex</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>runtime</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>synic</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>stimer</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>reset</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vendor_id</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>frequencies</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>reenlightenment</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tlbflush</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>ipi</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>avic</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>emsr_bitmap</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>xmm_input</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <defaults>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <spinlocks>4095</spinlocks>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <stimer_direct>on</stimer_direct>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </defaults>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </hyperv>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <launchSecurity supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='sectype'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tdx</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </launchSecurity>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </features>
Dec  1 17:22:21 np0005541603 nova_compute[189508]: </domainCapabilities>
Dec  1 17:22:21 np0005541603 nova_compute[189508]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.180 189512 DEBUG nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  1 17:22:21 np0005541603 nova_compute[189508]: <domainCapabilities>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <domain>kvm</domain>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <arch>x86_64</arch>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <vcpu max='240'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <iothreads supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <os supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <enum name='firmware'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <loader supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>rom</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pflash</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='readonly'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>yes</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>no</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='secure'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>no</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </loader>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </os>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <cpu>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <mode name='host-passthrough' supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='hostPassthroughMigratable'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>on</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>off</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </mode>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <mode name='maximum' supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='maximumMigratable'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>on</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>off</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </mode>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <mode name='host-model' supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <vendor>AMD</vendor>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='x2apic'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='hypervisor'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='stibp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='ssbd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='overflow-recov'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='succor'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='ibrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='lbrv'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='tsc-scale'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='flushbyasid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='pause-filter'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='pfthreshold'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <feature policy='disable' name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </mode>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <mode name='custom' supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-noTSX'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Broadwell-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cooperlake'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cooperlake-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Cooperlake-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Denverton'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mpx'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Denverton-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mpx'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Denverton-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Denverton-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Dhyana-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Genoa'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amd-psfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='auto-ibrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='stibp-always-on'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amd-psfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='auto-ibrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='stibp-always-on'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Milan'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Milan-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Milan-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amd-psfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='no-nested-data-bp'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='null-sel-clr-base'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='stibp-always-on'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Rome'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Rome-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Rome-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-Rome-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='EPYC-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='GraniteRapids'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='prefetchiti'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='GraniteRapids-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='prefetchiti'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='GraniteRapids-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx10'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx10-128'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx10-256'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx10-512'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='prefetchiti'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-noTSX'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Haswell-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v5'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v6'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Icelake-Server-v7'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='IvyBridge'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='IvyBridge-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='IvyBridge-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='IvyBridge-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='KnightsMill'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-4fmaps'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-4vnniw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512er'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512pf'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='KnightsMill-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-4fmaps'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-4vnniw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512er'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512pf'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Opteron_G4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fma4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xop'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Opteron_G4-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fma4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xop'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Opteron_G5'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fma4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tbm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xop'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Opteron_G5-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fma4'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tbm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xop'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SapphireRapids'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SapphireRapids-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SapphireRapids-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SapphireRapids-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='amx-tile'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-bf16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-fp16'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512-vpopcntdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bitalg'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vbmi2'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrc'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fzrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='la57'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='taa-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='tsx-ldtrk'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xfd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SierraForest'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-ne-convert'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cmpccxadd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='SierraForest-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-ifma'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-ne-convert'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx-vnni-int8'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='bus-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cmpccxadd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fbsdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='fsrs'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ibrs-all'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mcdt-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pbrsb-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='psdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='sbdr-ssdp-no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='serialize'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vaes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='vpclmulqdq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Client-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='hle'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='rtm'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Skylake-Server-v5'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512bw'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512cd'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512dq'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512f'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='avx512vl'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='invpcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pcid'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='pku'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='core-capability'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mpx'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='split-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='core-capability'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='mpx'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='split-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge-v2'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='core-capability'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='split-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge-v3'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='core-capability'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='split-lock-detect'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='Snowridge-v4'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='cldemote'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='erms'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='gfni'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdir64b'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='movdiri'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='xsaves'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='athlon'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnow'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnowext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='athlon-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnow'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnowext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='core2duo'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='core2duo-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='coreduo'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='coreduo-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='n270'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='n270-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='ss'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='phenom'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnow'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnowext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <blockers model='phenom-v1'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnow'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <feature name='3dnowext'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </blockers>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </mode>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </cpu>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <memoryBacking supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <enum name='sourceType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>file</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>anonymous</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <value>memfd</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </memoryBacking>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <devices>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <disk supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='diskDevice'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>disk</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>cdrom</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>floppy</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>lun</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='bus'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>ide</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>fdc</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>scsi</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>usb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>sata</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio-transitional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio-non-transitional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </disk>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <graphics supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vnc</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>egl-headless</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>dbus</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </graphics>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <video supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='modelType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vga</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>cirrus</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>none</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>bochs</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>ramfb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </video>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <hostdev supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='mode'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>subsystem</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='startupPolicy'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>default</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>mandatory</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>requisite</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>optional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='subsysType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>usb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pci</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>scsi</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='capsType'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='pciBackend'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </hostdev>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <rng supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio-transitional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtio-non-transitional</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendModel'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>random</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>egd</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>builtin</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </rng>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <filesystem supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='driverType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>path</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>handle</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>virtiofs</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </filesystem>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <tpm supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tpm-tis</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tpm-crb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendModel'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>emulator</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>external</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendVersion'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>2.0</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </tpm>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <redirdev supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='bus'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>usb</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </redirdev>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <channel supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pty</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>unix</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </channel>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <crypto supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>qemu</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendModel'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>builtin</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </crypto>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <interface supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='backendType'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>default</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>passt</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </interface>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <panic supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='model'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>isa</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>hyperv</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </panic>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <console supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='type'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>null</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vc</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pty</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>dev</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>file</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>pipe</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>stdio</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>udp</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tcp</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>unix</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>qemu-vdagent</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>dbus</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </console>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </devices>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  <features>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <gic supported='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <vmcoreinfo supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <genid supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <backingStoreInput supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <backup supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <async-teardown supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <ps2 supported='yes'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <sev supported='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <sgx supported='no'/>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <hyperv supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='features'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>relaxed</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vapic</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>spinlocks</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vpindex</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>runtime</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>synic</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>stimer</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>reset</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>vendor_id</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>frequencies</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>reenlightenment</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tlbflush</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>ipi</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>avic</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>emsr_bitmap</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>xmm_input</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <defaults>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <spinlocks>4095</spinlocks>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <stimer_direct>on</stimer_direct>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </defaults>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </hyperv>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    <launchSecurity supported='yes'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      <enum name='sectype'>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:        <value>tdx</value>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:      </enum>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:    </launchSecurity>
Dec  1 17:22:21 np0005541603 nova_compute[189508]:  </features>
Dec  1 17:22:21 np0005541603 nova_compute[189508]: </domainCapabilities>
Dec  1 17:22:21 np0005541603 nova_compute[189508]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.245 189512 DEBUG nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.246 189512 INFO nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Secure Boot support detected#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.249 189512 INFO nova.virt.libvirt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.260 189512 DEBUG nova.virt.libvirt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.283 189512 INFO nova.virt.node [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Determined node identity 4ec36104-0fe8-4c15-929c-861f303bb3ec from /var/lib/nova/compute_id#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.305 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Verified node 4ec36104-0fe8-4c15-929c-861f303bb3ec matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.333 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.799 189512 ERROR nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Could not retrieve compute node resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '4ec36104-0fe8-4c15-929c-861f303bb3ec' not found: No resource provider with uuid 4ec36104-0fe8-4c15-929c-861f303bb3ec found  ", "request_id": "req-7aab290e-8326-4fa6-b2b3-c1e967af3161"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '4ec36104-0fe8-4c15-929c-861f303bb3ec' not found: No resource provider with uuid 4ec36104-0fe8-4c15-929c-861f303bb3ec found  ", "request_id": "req-7aab290e-8326-4fa6-b2b3-c1e967af3161"}]}#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.861 189512 DEBUG oslo_concurrency.lockutils [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.861 189512 DEBUG oslo_concurrency.lockutils [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.861 189512 DEBUG oslo_concurrency.lockutils [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:22:21 np0005541603 nova_compute[189508]: 2025-12-01 22:22:21.862 189512 DEBUG nova.compute.resource_tracker [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.044 189512 WARNING nova.virt.libvirt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.045 189512 DEBUG nova.compute.resource_tracker [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6070MB free_disk=72.4270248413086GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.045 189512 DEBUG oslo_concurrency.lockutils [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.046 189512 DEBUG oslo_concurrency.lockutils [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.199 189512 ERROR nova.compute.resource_tracker [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '4ec36104-0fe8-4c15-929c-861f303bb3ec' not found: No resource provider with uuid 4ec36104-0fe8-4c15-929c-861f303bb3ec found  ", "request_id": "req-502a9992-ae4a-4d78-96b8-8cab6dd035e3"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '4ec36104-0fe8-4c15-929c-861f303bb3ec' not found: No resource provider with uuid 4ec36104-0fe8-4c15-929c-861f303bb3ec found  ", "request_id": "req-502a9992-ae4a-4d78-96b8-8cab6dd035e3"}]}#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.200 189512 DEBUG nova.compute.resource_tracker [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.200 189512 DEBUG nova.compute.resource_tracker [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.678 189512 INFO nova.scheduler.client.report [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [req-a928b661-4f29-4b29-b5ad-8a4516eefa0e] Created resource provider record via placement API for resource provider with UUID 4ec36104-0fe8-4c15-929c-861f303bb3ec and name compute-0.ctlplane.example.com.#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.705 189512 DEBUG nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec  1 17:22:22 np0005541603 nova_compute[189508]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.705 189512 INFO nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] kernel doesn't support AMD SEV#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.706 189512 DEBUG nova.compute.provider_tree [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Updating inventory in ProviderTree for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.706 189512 DEBUG nova.virt.libvirt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.751 189512 DEBUG nova.scheduler.client.report [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Updated inventory for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.751 189512 DEBUG nova.compute.provider_tree [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Updating resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.752 189512 DEBUG nova.compute.provider_tree [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Updating inventory in ProviderTree for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.897 189512 DEBUG nova.compute.provider_tree [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Updating resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.920 189512 DEBUG nova.compute.resource_tracker [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.921 189512 DEBUG oslo_concurrency.lockutils [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:22:22 np0005541603 nova_compute[189508]: 2025-12-01 22:22:22.921 189512 DEBUG nova.service [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Dec  1 17:22:23 np0005541603 nova_compute[189508]: 2025-12-01 22:22:23.014 189512 DEBUG nova.service [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Dec  1 17:22:23 np0005541603 nova_compute[189508]: 2025-12-01 22:22:23.015 189512 DEBUG nova.servicegroup.drivers.db [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Dec  1 17:22:25 np0005541603 systemd-logind[788]: New session 25 of user zuul.
Dec  1 17:22:25 np0005541603 systemd[1]: Started Session 25 of User zuul.
Dec  1 17:22:26 np0005541603 python3.9[189965]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 17:22:28 np0005541603 python3.9[190121]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:22:28 np0005541603 systemd[1]: Reloading.
Dec  1 17:22:28 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:22:28 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:22:29 np0005541603 python3.9[190308]: ansible-ansible.builtin.service_facts Invoked
Dec  1 17:22:29 np0005541603 network[190325]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 17:22:29 np0005541603 network[190326]: 'network-scripts' will be removed from distribution in near future.
Dec  1 17:22:29 np0005541603 network[190327]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 17:22:31 np0005541603 podman[190372]: 2025-12-01 22:22:31.598834778 +0000 UTC m=+0.096325169 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 17:22:34 np0005541603 python3.9[190621]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:22:36 np0005541603 python3.9[190774]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:22:36 np0005541603 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 17:22:36 np0005541603 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 17:22:37 np0005541603 python3.9[190927]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:22:38 np0005541603 python3.9[191081]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:22:38 np0005541603 podman[191160]: 2025-12-01 22:22:38.85311485 +0000 UTC m=+0.129109114 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Dec  1 17:22:39 np0005541603 python3.9[191256]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 17:22:40 np0005541603 python3.9[191408]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:22:40 np0005541603 systemd[1]: Reloading.
Dec  1 17:22:40 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:22:40 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:22:41 np0005541603 podman[191566]: 2025-12-01 22:22:41.306123387 +0000 UTC m=+0.106107888 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:22:41 np0005541603 python3.9[191611]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:22:42 np0005541603 python3.9[191764]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:22:43 np0005541603 python3.9[191914]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:22:44 np0005541603 python3.9[192066]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:22:45 np0005541603 python3.9[192187]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627763.6109517-133-120625029621468/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:22:46 np0005541603 python3.9[192339]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec  1 17:22:47 np0005541603 python3.9[192491]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  1 17:22:48 np0005541603 python3.9[192644]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 17:22:49 np0005541603 python3.9[192802]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  1 17:22:51 np0005541603 python3.9[192960]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:22:51 np0005541603 python3.9[193081]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764627770.5046983-201-48725688499449/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:22:52 np0005541603 python3.9[193231]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:22:53 np0005541603 python3.9[193352]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764627771.7978776-201-83647391311703/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:22:53 np0005541603 python3.9[193502]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:22:54 np0005541603 python3.9[193623]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764627773.2657344-201-80721809285936/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:22:55 np0005541603 python3.9[193773]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:22:56 np0005541603 python3.9[193925]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:22:57 np0005541603 python3.9[194077]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:22:57 np0005541603 python3.9[194198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627776.4401054-260-166731485307679/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:22:58 np0005541603 python3.9[194348]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:22:58 np0005541603 python3.9[194424]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:22:59 np0005541603 python3.9[194574]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:23:00 np0005541603 python3.9[194695]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627779.1004996-260-54397087933709/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:01 np0005541603 python3.9[194845]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:23:01 np0005541603 podman[194967]: 2025-12-01 22:23:01.840519173 +0000 UTC m=+0.112559315 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  1 17:23:01 np0005541603 python3.9[194966]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627780.6240418-260-122047377118291/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:02 np0005541603 python3.9[195138]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:23:03 np0005541603 python3.9[195261]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627782.0986362-260-178349484760396/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:04 np0005541603 python3.9[195411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:23:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:23:04.594 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:23:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:23:04.595 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:23:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:23:04.595 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:23:04 np0005541603 python3.9[195532]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627783.4797964-260-47034479916401/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:05 np0005541603 python3.9[195682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:23:05 np0005541603 python3.9[195803]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627784.9046464-260-264952858081206/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:06 np0005541603 python3.9[195953]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:23:07 np0005541603 python3.9[196076]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627786.1178617-260-123289126499319/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:08 np0005541603 python3.9[196226]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:23:08 np0005541603 python3.9[196347]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627787.5522876-260-241158050140317/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:09 np0005541603 nova_compute[189508]: 2025-12-01 22:23:09.016 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:23:09 np0005541603 nova_compute[189508]: 2025-12-01 22:23:09.049 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:23:09 np0005541603 podman[196471]: 2025-12-01 22:23:09.58071112 +0000 UTC m=+0.145868373 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 17:23:09 np0005541603 python3.9[196510]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:23:10 np0005541603 python3.9[196644]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627789.0191586-260-248800581483450/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:11 np0005541603 python3.9[196794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:23:11 np0005541603 podman[196889]: 2025-12-01 22:23:11.595507142 +0000 UTC m=+0.077906219 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  1 17:23:11 np0005541603 python3.9[196926]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627790.5556293-260-103825629212330/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:12 np0005541603 python3.9[197084]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:23:13 np0005541603 python3.9[197160]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:14 np0005541603 python3.9[197310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:23:14 np0005541603 python3.9[197386]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:15 np0005541603 python3.9[197536]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:23:15 np0005541603 python3.9[197612]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:16 np0005541603 python3.9[197764]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:17 np0005541603 python3.9[197916]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:18 np0005541603 python3.9[198068]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:23:19 np0005541603 python3.9[198220]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:23:19 np0005541603 systemd[1]: Reloading.
Dec  1 17:23:19 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:23:19 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:23:19 np0005541603 systemd[1]: Listening on Podman API Socket.
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.203 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.203 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.203 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.231 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.231 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.232 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.233 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.233 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.233 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.234 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.234 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.235 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.275 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.276 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.276 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.276 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.530 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.531 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5998MB free_disk=72.42706298828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.531 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.532 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.633 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.633 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.683 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.705 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.707 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 17:23:20 np0005541603 nova_compute[189508]: 2025-12-01 22:23:20.707 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:23:20 np0005541603 python3.9[198413]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:23:20 np0005541603 auditd[704]: Audit daemon rotating log files
Dec  1 17:23:21 np0005541603 python3.9[198536]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627800.067613-482-146078730848953/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:23:21 np0005541603 python3.9[198612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:23:22 np0005541603 python3.9[198735]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627800.067613-482-146078730848953/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:23:23 np0005541603 python3.9[198887]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec  1 17:23:24 np0005541603 python3.9[199039]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 17:23:26 np0005541603 python3[199191]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 17:23:26 np0005541603 podman[199229]: 2025-12-01 22:23:26.558430997 +0000 UTC m=+0.057525924 container create f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  1 17:23:26 np0005541603 podman[199229]: 2025-12-01 22:23:26.527950071 +0000 UTC m=+0.027044978 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec  1 17:23:26 np0005541603 python3[199191]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Dec  1 17:23:27 np0005541603 python3.9[199418]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:23:28 np0005541603 python3.9[199572]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:29 np0005541603 python3.9[199723]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764627808.5590668-546-71112375817504/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:30 np0005541603 python3.9[199799]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:23:30 np0005541603 systemd[1]: Reloading.
Dec  1 17:23:30 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:23:30 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:23:31 np0005541603 python3.9[199910]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:23:31 np0005541603 systemd[1]: Reloading.
Dec  1 17:23:31 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:23:31 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:23:31 np0005541603 systemd[1]: Starting ceilometer_agent_compute container...
Dec  1 17:23:31 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:23:31 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb7dac62c1ecd29582889376396dcbac7ede6ff8f466ba33ebcc02b8a0078c2/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 17:23:31 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb7dac62c1ecd29582889376396dcbac7ede6ff8f466ba33ebcc02b8a0078c2/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 17:23:31 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb7dac62c1ecd29582889376396dcbac7ede6ff8f466ba33ebcc02b8a0078c2/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  1 17:23:31 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb7dac62c1ecd29582889376396dcbac7ede6ff8f466ba33ebcc02b8a0078c2/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  1 17:23:32 np0005541603 systemd[1]: Started /usr/bin/podman healthcheck run f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe.
Dec  1 17:23:32 np0005541603 podman[199950]: 2025-12-01 22:23:32.169780703 +0000 UTC m=+0.594546795 container init f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: + sudo -E kolla_set_configs
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: sudo: unable to send audit message: Operation not permitted
Dec  1 17:23:32 np0005541603 podman[199950]: 2025-12-01 22:23:32.216774944 +0000 UTC m=+0.641540976 container start f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec  1 17:23:32 np0005541603 podman[199950]: ceilometer_agent_compute
Dec  1 17:23:32 np0005541603 systemd[1]: Started ceilometer_agent_compute container.
Dec  1 17:23:32 np0005541603 podman[199968]: 2025-12-01 22:23:32.243813261 +0000 UTC m=+0.088572216 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: INFO:__main__:Validating config file
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: INFO:__main__:Copying service configuration files
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: INFO:__main__:Writing out command to execute
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: ++ cat /run_command
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: + ARGS=
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: + sudo kolla_copy_cacerts
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: sudo: unable to send audit message: Operation not permitted
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: + [[ ! -n '' ]]
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: + . kolla_extend_start
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: + umask 0022
Dec  1 17:23:32 np0005541603 ceilometer_agent_compute[199965]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec  1 17:23:32 np0005541603 podman[199985]: 2025-12-01 22:23:32.314627665 +0000 UTC m=+0.079874886 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec  1 17:23:32 np0005541603 systemd[1]: f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe-648d8a5abdacfda7.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 17:23:32 np0005541603 systemd[1]: f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe-648d8a5abdacfda7.service: Failed with result 'exit-code'.
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.080 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.080 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.081 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.081 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.081 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.081 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.081 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.081 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.081 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.081 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.081 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.082 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.082 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.082 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.082 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.082 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.082 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.082 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.082 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.083 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.083 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.083 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.083 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.083 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.083 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.083 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.083 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.084 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.084 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.084 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.084 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.084 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.084 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.084 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.084 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.084 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.084 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.085 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.085 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.085 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.085 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.085 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.085 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.085 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.085 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.085 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.085 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.085 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.086 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.086 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.086 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.086 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.086 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.086 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.086 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.086 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.086 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.086 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.087 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.087 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.087 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.087 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.087 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.087 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.087 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.087 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.087 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.087 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.088 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.088 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.088 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.088 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.088 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.088 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.088 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.088 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.088 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.088 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.089 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.089 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.089 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.089 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.089 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.089 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.089 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.089 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.089 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.090 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.090 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.090 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.090 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.090 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.090 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.090 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.090 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.090 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.090 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.091 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.091 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.091 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.091 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.091 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.091 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.091 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.091 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.091 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.092 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.092 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.092 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.092 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.092 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.092 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.092 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.092 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.092 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.092 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.093 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.093 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.093 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.093 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.093 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.093 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.093 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.093 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.093 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.093 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.094 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.094 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.094 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.094 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.094 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.094 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.094 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.094 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.094 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.094 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.095 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.095 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.095 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.095 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.095 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.095 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.095 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.095 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.095 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.095 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.096 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.096 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.096 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.096 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.096 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.096 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.096 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.115 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.116 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.116 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.116 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.117 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.117 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.117 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.117 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.117 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.117 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.117 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.117 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.117 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.118 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.118 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.118 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.118 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.118 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.118 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.118 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.118 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.118 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.118 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.119 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.119 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.119 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.119 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.119 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.119 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.119 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.119 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.119 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.119 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.119 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.119 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.119 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.120 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.120 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.120 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.120 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.120 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.120 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.120 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.120 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.120 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.120 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.120 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.120 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.121 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.121 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.121 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.121 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.121 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.121 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.121 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.121 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.121 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.121 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.121 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.121 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.122 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.122 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.122 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.122 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.122 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.122 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.122 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.122 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.122 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.122 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.122 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.122 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.122 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.123 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.123 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.123 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.123 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.123 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.123 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.123 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.123 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.123 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.123 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.123 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.124 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.124 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.124 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.124 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.124 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.124 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.124 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.124 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.124 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.124 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.124 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.124 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.125 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.125 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.125 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.125 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.125 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.125 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.125 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.125 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.125 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.125 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.125 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.126 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.126 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.126 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.126 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.126 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.126 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.126 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.126 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.126 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.126 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.126 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.127 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.127 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.127 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.127 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.127 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.127 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.127 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.127 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.127 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.128 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.128 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.128 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.128 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.128 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.128 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.128 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.128 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.129 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.129 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.129 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.129 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.129 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.129 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.129 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.129 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.129 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.129 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.129 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.131 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.133 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.133 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  1 17:23:33 np0005541603 python3.9[200163]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:23:33 np0005541603 systemd[1]: Stopping ceilometer_agent_compute container...
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.328 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.356 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.364 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.364 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.364 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.429 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.430 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.430 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.481 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.481 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.481 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.481 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.482 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.482 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.482 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.482 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.482 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.482 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.482 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.482 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.483 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.483 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.483 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.483 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.483 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.483 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.484 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.484 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.484 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.484 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.484 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.484 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.484 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.484 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.485 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.485 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.485 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.485 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.485 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.485 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.485 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.485 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.485 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.486 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.486 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.486 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.486 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.486 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.486 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.486 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.486 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.486 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.486 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.487 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.487 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.487 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.487 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.487 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.487 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.487 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.487 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.488 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.488 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.488 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.488 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.488 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.488 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.488 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.488 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.489 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.489 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.489 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.489 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.489 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.489 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.489 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.489 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.489 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.490 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.490 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.490 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.490 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.490 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.490 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.490 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.490 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.490 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.490 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.491 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.491 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.491 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.491 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.491 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.491 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.491 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.492 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.492 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.492 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.492 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.492 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.492 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.492 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.492 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.492 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.492 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.493 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.493 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.493 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.493 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.493 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.493 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.493 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.493 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.493 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.494 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.494 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.494 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.494 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.494 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.494 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.494 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.494 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.494 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.495 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.495 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.495 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.495 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.495 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.495 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.495 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.495 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.495 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.495 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.495 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.495 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.496 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.496 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.496 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.496 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.496 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.496 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.496 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.496 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.496 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.496 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.496 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.496 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.496 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.497 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.497 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.497 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.497 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.497 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.497 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.497 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.497 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.497 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.498 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.498 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.498 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.498 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.498 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.498 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.498 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.498 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.498 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.499 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.499 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.499 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.499 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.499 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.499 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.499 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.499 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.499 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.499 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.500 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.500 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Dec  1 17:23:33 np0005541603 ceilometer_agent_compute[199965]: 2025-12-01 22:23:33.511 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Dec  1 17:23:33 np0005541603 virtqemud[189130]: End of file while reading data: Input/output error
Dec  1 17:23:33 np0005541603 systemd[1]: libpod-f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe.scope: Deactivated successfully.
Dec  1 17:23:33 np0005541603 conmon[199965]: conmon f192dad1d7d3945ce21d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe.scope/container/memory.events
Dec  1 17:23:33 np0005541603 systemd[1]: libpod-f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe.scope: Consumed 1.501s CPU time.
Dec  1 17:23:33 np0005541603 podman[200175]: 2025-12-01 22:23:33.678772172 +0000 UTC m=+0.400833079 container died f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 17:23:33 np0005541603 systemd[1]: f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe-648d8a5abdacfda7.timer: Deactivated successfully.
Dec  1 17:23:33 np0005541603 systemd[1]: Stopped /usr/bin/podman healthcheck run f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe.
Dec  1 17:23:33 np0005541603 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe-userdata-shm.mount: Deactivated successfully.
Dec  1 17:23:33 np0005541603 systemd[1]: var-lib-containers-storage-overlay-eeb7dac62c1ecd29582889376396dcbac7ede6ff8f466ba33ebcc02b8a0078c2-merged.mount: Deactivated successfully.
Dec  1 17:23:33 np0005541603 podman[200175]: 2025-12-01 22:23:33.731093025 +0000 UTC m=+0.453153902 container cleanup f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Dec  1 17:23:33 np0005541603 podman[200175]: ceilometer_agent_compute
Dec  1 17:23:33 np0005541603 podman[200209]: ceilometer_agent_compute
Dec  1 17:23:33 np0005541603 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec  1 17:23:33 np0005541603 systemd[1]: Stopped ceilometer_agent_compute container.
Dec  1 17:23:33 np0005541603 systemd[1]: Starting ceilometer_agent_compute container...
Dec  1 17:23:33 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:23:33 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb7dac62c1ecd29582889376396dcbac7ede6ff8f466ba33ebcc02b8a0078c2/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 17:23:33 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb7dac62c1ecd29582889376396dcbac7ede6ff8f466ba33ebcc02b8a0078c2/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 17:23:33 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb7dac62c1ecd29582889376396dcbac7ede6ff8f466ba33ebcc02b8a0078c2/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  1 17:23:33 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb7dac62c1ecd29582889376396dcbac7ede6ff8f466ba33ebcc02b8a0078c2/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  1 17:23:34 np0005541603 systemd[1]: Started /usr/bin/podman healthcheck run f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe.
Dec  1 17:23:34 np0005541603 podman[200222]: 2025-12-01 22:23:34.010913386 +0000 UTC m=+0.160761050 container init f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, tcib_managed=true)
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: + sudo -E kolla_set_configs
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: sudo: unable to send audit message: Operation not permitted
Dec  1 17:23:34 np0005541603 podman[200222]: 2025-12-01 22:23:34.050748121 +0000 UTC m=+0.200595725 container start f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 17:23:34 np0005541603 podman[200222]: ceilometer_agent_compute
Dec  1 17:23:34 np0005541603 systemd[1]: Started ceilometer_agent_compute container.
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Validating config file
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Copying service configuration files
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: INFO:__main__:Writing out command to execute
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: ++ cat /run_command
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: + ARGS=
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: + sudo kolla_copy_cacerts
Dec  1 17:23:34 np0005541603 podman[200244]: 2025-12-01 22:23:34.147420468 +0000 UTC m=+0.076830278 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 17:23:34 np0005541603 systemd[1]: f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe-798195cca5294e70.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 17:23:34 np0005541603 systemd[1]: f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe-798195cca5294e70.service: Failed with result 'exit-code'.
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: sudo: unable to send audit message: Operation not permitted
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: + [[ ! -n '' ]]
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: + . kolla_extend_start
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: + umask 0022
Dec  1 17:23:34 np0005541603 ceilometer_agent_compute[200237]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec  1 17:23:34 np0005541603 python3.9[200418]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.006 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.006 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.006 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.006 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.006 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.006 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.006 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.006 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.007 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.007 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.007 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.007 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.007 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.007 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.007 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.007 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.007 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.007 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.008 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.008 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.008 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.008 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.008 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.008 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.008 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.008 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.008 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.008 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.008 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.008 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.009 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.009 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.009 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.009 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.009 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.009 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.009 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.009 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.009 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.009 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.009 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.009 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.009 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.009 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.009 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.010 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.010 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.010 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.010 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.010 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.010 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.010 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.010 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.010 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.010 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.011 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.011 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.011 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.011 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.011 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.011 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.011 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.011 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.011 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.011 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.011 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.011 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.011 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.011 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.012 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.012 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.012 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.012 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.012 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.012 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.012 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.012 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.012 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.012 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.012 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.012 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.013 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.013 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.013 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.013 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.013 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.013 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.013 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.013 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.013 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.014 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.014 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.014 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.014 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.014 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.014 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.014 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.014 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.014 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.014 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.014 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.014 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.015 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.015 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.015 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.015 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.015 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.015 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.015 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.015 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.015 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.015 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.015 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.015 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.015 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.015 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.016 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.016 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.016 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.016 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.016 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.016 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.016 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.016 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.016 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.016 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.016 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.016 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.016 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.017 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.017 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.017 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.017 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.017 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.017 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.017 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.017 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.017 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.017 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.017 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.017 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.017 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.018 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.018 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.018 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.018 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.018 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.018 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.018 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.018 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.018 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.018 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.018 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.041 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.042 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.042 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.042 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.042 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.043 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.043 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.043 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.043 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.043 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.043 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.043 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.044 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.044 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.044 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.044 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.044 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.044 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.044 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.044 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.045 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.045 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.045 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.045 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.045 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.045 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.045 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.045 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.046 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.046 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.046 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.046 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.046 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.046 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.046 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.046 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.046 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.047 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.047 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.047 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.047 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.047 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.047 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.047 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.047 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.048 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.048 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.048 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.048 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.048 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.048 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.048 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.048 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.049 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.049 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.049 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.049 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.049 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.049 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.049 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.049 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.049 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.050 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.050 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.050 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.050 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.050 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.050 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.050 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.050 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.051 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.051 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.051 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.051 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.051 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.051 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.051 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.052 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.052 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.052 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.052 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.052 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.052 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.052 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.053 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.053 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.053 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.053 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.053 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.053 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.053 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.053 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.054 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.054 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.054 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.054 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.054 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.054 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.054 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.055 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.055 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.055 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.055 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.055 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.055 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.055 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.055 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.056 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.056 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.056 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.056 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.056 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.056 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.056 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.056 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.057 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.057 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.057 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.057 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.057 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.057 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.057 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.057 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.058 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.058 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.058 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.058 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.058 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.058 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.058 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.058 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.059 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.059 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.059 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.059 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.059 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.059 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.059 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.059 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.060 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.060 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.060 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.060 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.060 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.060 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.060 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.060 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.060 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.061 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.061 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.061 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.061 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.061 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.061 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.064 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.066 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.068 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.069 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.077 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.079 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.079 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.215 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.216 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.216 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.216 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.216 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.216 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.216 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.216 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.216 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.216 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.216 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.216 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.217 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.217 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.217 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.217 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.217 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.217 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.217 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.217 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.217 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.217 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.217 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.217 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.218 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.218 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.218 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.218 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.218 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.218 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.218 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.218 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.218 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.218 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.218 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.218 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.219 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.219 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.219 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.219 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.219 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.219 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.219 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.219 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.219 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.219 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.219 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.219 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.219 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.220 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.220 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.220 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.220 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.220 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.220 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.220 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.220 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.220 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.220 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.220 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.220 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.221 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.221 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.221 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.221 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.221 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.221 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.221 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.221 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.221 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.221 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.222 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.222 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.222 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.222 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.222 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.222 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.222 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.222 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.222 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.222 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.222 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.223 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.223 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.223 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.223 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.223 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.223 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.223 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.223 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.223 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.224 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.224 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.224 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.224 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.224 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.224 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.224 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.224 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.224 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.224 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.224 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.225 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.225 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.225 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.225 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.225 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.225 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.225 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.225 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.225 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.226 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.226 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.226 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.226 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.226 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.226 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.226 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.226 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.226 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.226 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.226 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.226 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.227 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.227 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.227 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.227 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.227 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.227 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.227 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.227 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.227 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.227 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.227 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.227 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.227 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.227 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.228 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.228 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.228 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.228 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.228 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.228 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.228 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.228 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.228 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.228 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.228 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.228 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.228 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.229 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.229 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.229 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.229 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.229 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.229 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.229 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.229 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.229 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.229 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.229 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.229 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.229 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.230 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.230 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.230 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.230 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.230 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.230 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.234 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.260 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.261 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.264 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.268 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.270 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.275 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b276b0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.276 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.276 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.276 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.276 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.276 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.276 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.277 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.277 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.277 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.277 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.281 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.282 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:23:35.283 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:23:35 np0005541603 python3.9[200554]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627814.2961261-578-214883301933383/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:23:36 np0005541603 python3.9[200706]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec  1 17:23:37 np0005541603 python3.9[200858]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 17:23:38 np0005541603 python3[201010]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 17:23:38 np0005541603 podman[201046]: 2025-12-01 22:23:38.623676729 +0000 UTC m=+0.058083420 container create 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible)
Dec  1 17:23:38 np0005541603 podman[201046]: 2025-12-01 22:23:38.595543641 +0000 UTC m=+0.029950362 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  1 17:23:38 np0005541603 python3[201010]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Dec  1 17:23:39 np0005541603 python3.9[201236]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:23:39 np0005541603 podman[201237]: 2025-12-01 22:23:39.85712698 +0000 UTC m=+0.130451628 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  1 17:23:40 np0005541603 python3.9[201416]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:41 np0005541603 python3.9[201567]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764627820.8568046-631-121620651597104/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:41 np0005541603 podman[201588]: 2025-12-01 22:23:41.812586429 +0000 UTC m=+0.088753861 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 17:23:42 np0005541603 python3.9[201662]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:23:42 np0005541603 systemd[1]: Reloading.
Dec  1 17:23:42 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:23:42 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:23:43 np0005541603 python3.9[201774]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:23:44 np0005541603 systemd[1]: Reloading.
Dec  1 17:23:44 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:23:44 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:23:44 np0005541603 systemd[1]: Starting node_exporter container...
Dec  1 17:23:44 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:23:44 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7fd0c118e41d55d6dbd98fcf21ebab5501bc219ece6bc368cbfbf27e14ea8f/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 17:23:44 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7fd0c118e41d55d6dbd98fcf21ebab5501bc219ece6bc368cbfbf27e14ea8f/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 17:23:44 np0005541603 systemd[1]: Started /usr/bin/podman healthcheck run 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d.
Dec  1 17:23:44 np0005541603 podman[201814]: 2025-12-01 22:23:44.892883428 +0000 UTC m=+0.137086940 container init 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.916Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.916Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.916Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.918Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.918Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.918Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.918Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.919Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.919Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.919Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.919Z caller=node_exporter.go:117 level=info collector=arp
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.919Z caller=node_exporter.go:117 level=info collector=bcache
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.919Z caller=node_exporter.go:117 level=info collector=bonding
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.919Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.919Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.919Z caller=node_exporter.go:117 level=info collector=cpu
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.919Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.919Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.919Z caller=node_exporter.go:117 level=info collector=edac
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.919Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=filefd
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=netclass
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=netdev
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=netstat
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=nfs
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=nvme
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=softnet
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=systemd
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=xfs
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.920Z caller=node_exporter.go:117 level=info collector=zfs
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.921Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  1 17:23:44 np0005541603 node_exporter[201830]: ts=2025-12-01T22:23:44.922Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  1 17:23:44 np0005541603 podman[201814]: 2025-12-01 22:23:44.923676553 +0000 UTC m=+0.167879995 container start 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 17:23:44 np0005541603 podman[201814]: node_exporter
Dec  1 17:23:44 np0005541603 systemd[1]: Started node_exporter container.
Dec  1 17:23:45 np0005541603 podman[201840]: 2025-12-01 22:23:45.019922308 +0000 UTC m=+0.077296602 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 17:23:45 np0005541603 python3.9[202015]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:23:45 np0005541603 systemd[1]: Stopping node_exporter container...
Dec  1 17:23:46 np0005541603 systemd[1]: libpod-12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d.scope: Deactivated successfully.
Dec  1 17:23:46 np0005541603 podman[202019]: 2025-12-01 22:23:46.05010148 +0000 UTC m=+0.057904305 container died 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 17:23:46 np0005541603 systemd[1]: 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d-274964ddf5509225.timer: Deactivated successfully.
Dec  1 17:23:46 np0005541603 systemd[1]: Stopped /usr/bin/podman healthcheck run 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d.
Dec  1 17:23:46 np0005541603 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d-userdata-shm.mount: Deactivated successfully.
Dec  1 17:23:46 np0005541603 systemd[1]: var-lib-containers-storage-overlay-6b7fd0c118e41d55d6dbd98fcf21ebab5501bc219ece6bc368cbfbf27e14ea8f-merged.mount: Deactivated successfully.
Dec  1 17:23:46 np0005541603 podman[202019]: 2025-12-01 22:23:46.101490926 +0000 UTC m=+0.109293771 container cleanup 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 17:23:46 np0005541603 podman[202019]: node_exporter
Dec  1 17:23:46 np0005541603 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  1 17:23:46 np0005541603 podman[202047]: node_exporter
Dec  1 17:23:46 np0005541603 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec  1 17:23:46 np0005541603 systemd[1]: Stopped node_exporter container.
Dec  1 17:23:46 np0005541603 systemd[1]: Starting node_exporter container...
Dec  1 17:23:46 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:23:46 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7fd0c118e41d55d6dbd98fcf21ebab5501bc219ece6bc368cbfbf27e14ea8f/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 17:23:46 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7fd0c118e41d55d6dbd98fcf21ebab5501bc219ece6bc368cbfbf27e14ea8f/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 17:23:46 np0005541603 systemd[1]: Started /usr/bin/podman healthcheck run 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d.
Dec  1 17:23:46 np0005541603 podman[202059]: 2025-12-01 22:23:46.380866854 +0000 UTC m=+0.135273138 container init 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.396Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.396Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.396Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.397Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.397Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.397Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.398Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.398Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=arp
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=bcache
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=bonding
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=cpu
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=edac
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=filefd
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=netclass
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=netdev
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=netstat
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=nfs
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=nvme
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=softnet
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=systemd
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=xfs
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.399Z caller=node_exporter.go:117 level=info collector=zfs
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.400Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  1 17:23:46 np0005541603 node_exporter[202075]: ts=2025-12-01T22:23:46.401Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  1 17:23:46 np0005541603 podman[202059]: 2025-12-01 22:23:46.413521492 +0000 UTC m=+0.167927706 container start 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 17:23:46 np0005541603 podman[202059]: node_exporter
Dec  1 17:23:46 np0005541603 systemd[1]: Started node_exporter container.
Dec  1 17:23:46 np0005541603 podman[202084]: 2025-12-01 22:23:46.520591399 +0000 UTC m=+0.087611819 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 17:23:47 np0005541603 python3.9[202259]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:23:47 np0005541603 python3.9[202382]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627826.6637204-663-153529460166770/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:23:48 np0005541603 python3.9[202534]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec  1 17:23:49 np0005541603 python3.9[202687]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 17:23:50 np0005541603 python3[202839]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 17:23:52 np0005541603 podman[202851]: 2025-12-01 22:23:52.742836988 +0000 UTC m=+1.651038242 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  1 17:23:52 np0005541603 podman[202948]: 2025-12-01 22:23:52.92526946 +0000 UTC m=+0.070621750 container create 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter)
Dec  1 17:23:52 np0005541603 podman[202948]: 2025-12-01 22:23:52.895808994 +0000 UTC m=+0.041161284 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  1 17:23:52 np0005541603 python3[202839]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Dec  1 17:23:53 np0005541603 python3.9[203135]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:23:54 np0005541603 python3.9[203289]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:55 np0005541603 python3.9[203440]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764627834.9537678-716-82543396343310/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:23:56 np0005541603 python3.9[203516]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:23:56 np0005541603 systemd[1]: Reloading.
Dec  1 17:23:56 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:23:56 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:23:57 np0005541603 python3.9[203626]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:23:57 np0005541603 systemd[1]: Reloading.
Dec  1 17:23:57 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:23:57 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:23:57 np0005541603 systemd[1]: Starting podman_exporter container...
Dec  1 17:23:57 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:23:57 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638bcf69f93378d616884526a54a2271c230de8bad58d7adfd0a53c85103ede9/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 17:23:57 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638bcf69f93378d616884526a54a2271c230de8bad58d7adfd0a53c85103ede9/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 17:23:57 np0005541603 systemd[1]: Started /usr/bin/podman healthcheck run 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1.
Dec  1 17:23:57 np0005541603 podman[203667]: 2025-12-01 22:23:57.875935212 +0000 UTC m=+0.171021775 container init 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 17:23:57 np0005541603 podman_exporter[203682]: ts=2025-12-01T22:23:57.905Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  1 17:23:57 np0005541603 podman_exporter[203682]: ts=2025-12-01T22:23:57.905Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  1 17:23:57 np0005541603 podman_exporter[203682]: ts=2025-12-01T22:23:57.905Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  1 17:23:57 np0005541603 podman_exporter[203682]: ts=2025-12-01T22:23:57.905Z caller=handler.go:105 level=info collector=container
Dec  1 17:23:57 np0005541603 podman[203667]: 2025-12-01 22:23:57.919869594 +0000 UTC m=+0.214956177 container start 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 17:23:57 np0005541603 podman[203667]: podman_exporter
Dec  1 17:23:57 np0005541603 systemd[1]: Starting Podman API Service...
Dec  1 17:23:57 np0005541603 systemd[1]: Started podman_exporter container.
Dec  1 17:23:57 np0005541603 systemd[1]: Started Podman API Service.
Dec  1 17:23:57 np0005541603 podman[203693]: time="2025-12-01T22:23:57Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec  1 17:23:57 np0005541603 podman[203693]: time="2025-12-01T22:23:57Z" level=info msg="Setting parallel job count to 25"
Dec  1 17:23:57 np0005541603 podman[203693]: time="2025-12-01T22:23:57Z" level=info msg="Using sqlite as database backend"
Dec  1 17:23:57 np0005541603 podman[203693]: time="2025-12-01T22:23:57Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec  1 17:23:57 np0005541603 podman[203693]: time="2025-12-01T22:23:57Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec  1 17:23:57 np0005541603 podman[203693]: time="2025-12-01T22:23:57Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Dec  1 17:23:57 np0005541603 podman[203693]: @ - - [01/Dec/2025:22:23:57 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  1 17:23:58 np0005541603 podman[203693]: time="2025-12-01T22:23:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 17:23:58 np0005541603 podman[203691]: 2025-12-01 22:23:58.013510635 +0000 UTC m=+0.075193382 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 17:23:58 np0005541603 systemd[1]: 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1-619204eed30ea36.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 17:23:58 np0005541603 systemd[1]: 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1-619204eed30ea36.service: Failed with result 'exit-code'.
Dec  1 17:23:58 np0005541603 podman[203693]: @ - - [01/Dec/2025:22:23:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19588 "" "Go-http-client/1.1"
Dec  1 17:23:58 np0005541603 podman_exporter[203682]: ts=2025-12-01T22:23:58.026Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  1 17:23:58 np0005541603 podman_exporter[203682]: ts=2025-12-01T22:23:58.026Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  1 17:23:58 np0005541603 podman_exporter[203682]: ts=2025-12-01T22:23:58.030Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  1 17:23:58 np0005541603 python3.9[203882]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:23:58 np0005541603 systemd[1]: Stopping podman_exporter container...
Dec  1 17:23:59 np0005541603 podman[203693]: @ - - [01/Dec/2025:22:23:58 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Dec  1 17:23:59 np0005541603 systemd[1]: libpod-8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1.scope: Deactivated successfully.
Dec  1 17:23:59 np0005541603 podman[203886]: 2025-12-01 22:23:59.01282169 +0000 UTC m=+0.055498136 container died 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 17:23:59 np0005541603 systemd[1]: 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1-619204eed30ea36.timer: Deactivated successfully.
Dec  1 17:23:59 np0005541603 systemd[1]: Stopped /usr/bin/podman healthcheck run 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1.
Dec  1 17:23:59 np0005541603 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1-userdata-shm.mount: Deactivated successfully.
Dec  1 17:23:59 np0005541603 systemd[1]: var-lib-containers-storage-overlay-638bcf69f93378d616884526a54a2271c230de8bad58d7adfd0a53c85103ede9-merged.mount: Deactivated successfully.
Dec  1 17:23:59 np0005541603 podman[203886]: 2025-12-01 22:23:59.319226674 +0000 UTC m=+0.361903100 container cleanup 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 17:23:59 np0005541603 podman[203886]: podman_exporter
Dec  1 17:23:59 np0005541603 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  1 17:23:59 np0005541603 podman[203913]: podman_exporter
Dec  1 17:23:59 np0005541603 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec  1 17:23:59 np0005541603 systemd[1]: Stopped podman_exporter container.
Dec  1 17:23:59 np0005541603 systemd[1]: Starting podman_exporter container...
Dec  1 17:23:59 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:23:59 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638bcf69f93378d616884526a54a2271c230de8bad58d7adfd0a53c85103ede9/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 17:23:59 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638bcf69f93378d616884526a54a2271c230de8bad58d7adfd0a53c85103ede9/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 17:23:59 np0005541603 systemd[1]: Started /usr/bin/podman healthcheck run 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1.
Dec  1 17:23:59 np0005541603 podman[203926]: 2025-12-01 22:23:59.593477065 +0000 UTC m=+0.150965639 container init 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 17:23:59 np0005541603 podman_exporter[203941]: ts=2025-12-01T22:23:59.614Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  1 17:23:59 np0005541603 podman_exporter[203941]: ts=2025-12-01T22:23:59.614Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  1 17:23:59 np0005541603 podman_exporter[203941]: ts=2025-12-01T22:23:59.614Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  1 17:23:59 np0005541603 podman_exporter[203941]: ts=2025-12-01T22:23:59.614Z caller=handler.go:105 level=info collector=container
Dec  1 17:23:59 np0005541603 podman[203693]: @ - - [01/Dec/2025:22:23:59 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  1 17:23:59 np0005541603 podman[203693]: time="2025-12-01T22:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 17:23:59 np0005541603 podman[203926]: 2025-12-01 22:23:59.636976425 +0000 UTC m=+0.194464949 container start 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 17:23:59 np0005541603 podman[203926]: podman_exporter
Dec  1 17:23:59 np0005541603 systemd[1]: Started podman_exporter container.
Dec  1 17:23:59 np0005541603 podman[203693]: @ - - [01/Dec/2025:22:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19590 "" "Go-http-client/1.1"
Dec  1 17:23:59 np0005541603 podman_exporter[203941]: ts=2025-12-01T22:23:59.651Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  1 17:23:59 np0005541603 podman_exporter[203941]: ts=2025-12-01T22:23:59.651Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  1 17:23:59 np0005541603 podman_exporter[203941]: ts=2025-12-01T22:23:59.652Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  1 17:23:59 np0005541603 podman[203951]: 2025-12-01 22:23:59.775684651 +0000 UTC m=+0.117363004 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 17:24:00 np0005541603 python3.9[204127]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:24:01 np0005541603 python3.9[204250]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627839.945584-748-210523738224363/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:24:02 np0005541603 podman[204402]: 2025-12-01 22:24:02.429515167 +0000 UTC m=+0.101190752 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 17:24:02 np0005541603 python3.9[204403]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec  1 17:24:03 np0005541603 python3.9[204574]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 17:24:04 np0005541603 podman[204726]: 2025-12-01 22:24:04.345918998 +0000 UTC m=+0.084572325 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 17:24:04 np0005541603 systemd[1]: f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe-798195cca5294e70.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 17:24:04 np0005541603 systemd[1]: f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe-798195cca5294e70.service: Failed with result 'exit-code'.
Dec  1 17:24:04 np0005541603 python3[204727]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 17:24:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:24:04.594 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:24:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:24:04.594 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:24:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:24:04.594 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:24:07 np0005541603 podman[204758]: 2025-12-01 22:24:07.033952558 +0000 UTC m=+2.418960160 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  1 17:24:07 np0005541603 podman[204856]: 2025-12-01 22:24:07.22654746 +0000 UTC m=+0.071171039 container create 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, version=9.6)
Dec  1 17:24:07 np0005541603 podman[204856]: 2025-12-01 22:24:07.193996563 +0000 UTC m=+0.038620192 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  1 17:24:07 np0005541603 python3[204727]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  1 17:24:08 np0005541603 python3.9[205047]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:24:09 np0005541603 python3.9[205201]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:10 np0005541603 python3.9[205352]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764627849.2802422-801-68359956341082/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:10 np0005541603 podman[205400]: 2025-12-01 22:24:10.467800778 +0000 UTC m=+0.138593089 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 17:24:10 np0005541603 python3.9[205447]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:24:10 np0005541603 systemd[1]: Reloading.
Dec  1 17:24:10 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:24:10 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:24:11 np0005541603 python3.9[205566]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:24:11 np0005541603 systemd[1]: Reloading.
Dec  1 17:24:11 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:24:11 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:24:11 np0005541603 podman[205570]: 2025-12-01 22:24:11.972240373 +0000 UTC m=+0.107071691 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  1 17:24:12 np0005541603 systemd[1]: Starting openstack_network_exporter container...
Dec  1 17:24:12 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:24:12 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47241ee35183cc8e6c16ed59dfeb1cde124294c9712ff4a3cb20385f0528f33d/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  1 17:24:12 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47241ee35183cc8e6c16ed59dfeb1cde124294c9712ff4a3cb20385f0528f33d/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 17:24:12 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47241ee35183cc8e6c16ed59dfeb1cde124294c9712ff4a3cb20385f0528f33d/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 17:24:12 np0005541603 systemd[1]: Started /usr/bin/podman healthcheck run 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74.
Dec  1 17:24:12 np0005541603 podman[205625]: 2025-12-01 22:24:12.349969221 +0000 UTC m=+0.155126334 container init 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Dec  1 17:24:12 np0005541603 openstack_network_exporter[205642]: INFO    22:24:12 main.go:48: registering *bridge.Collector
Dec  1 17:24:12 np0005541603 openstack_network_exporter[205642]: INFO    22:24:12 main.go:48: registering *coverage.Collector
Dec  1 17:24:12 np0005541603 openstack_network_exporter[205642]: INFO    22:24:12 main.go:48: registering *datapath.Collector
Dec  1 17:24:12 np0005541603 openstack_network_exporter[205642]: INFO    22:24:12 main.go:48: registering *iface.Collector
Dec  1 17:24:12 np0005541603 openstack_network_exporter[205642]: INFO    22:24:12 main.go:48: registering *memory.Collector
Dec  1 17:24:12 np0005541603 openstack_network_exporter[205642]: INFO    22:24:12 main.go:48: registering *ovnnorthd.Collector
Dec  1 17:24:12 np0005541603 openstack_network_exporter[205642]: INFO    22:24:12 main.go:48: registering *ovn.Collector
Dec  1 17:24:12 np0005541603 openstack_network_exporter[205642]: INFO    22:24:12 main.go:48: registering *ovsdbserver.Collector
Dec  1 17:24:12 np0005541603 openstack_network_exporter[205642]: INFO    22:24:12 main.go:48: registering *pmd_perf.Collector
Dec  1 17:24:12 np0005541603 openstack_network_exporter[205642]: INFO    22:24:12 main.go:48: registering *pmd_rxq.Collector
Dec  1 17:24:12 np0005541603 openstack_network_exporter[205642]: INFO    22:24:12 main.go:48: registering *vswitch.Collector
Dec  1 17:24:12 np0005541603 openstack_network_exporter[205642]: NOTICE  22:24:12 main.go:76: listening on https://:9105/metrics
Dec  1 17:24:12 np0005541603 podman[205625]: 2025-12-01 22:24:12.390507558 +0000 UTC m=+0.195664621 container start 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.)
Dec  1 17:24:12 np0005541603 podman[205625]: openstack_network_exporter
Dec  1 17:24:12 np0005541603 systemd[1]: Started openstack_network_exporter container.
Dec  1 17:24:12 np0005541603 podman[205652]: 2025-12-01 22:24:12.522240588 +0000 UTC m=+0.112715214 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.openshift.tags=minimal rhel9, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  1 17:24:13 np0005541603 python3.9[205826]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:24:13 np0005541603 systemd[1]: Stopping openstack_network_exporter container...
Dec  1 17:24:13 np0005541603 systemd[1]: libpod-9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74.scope: Deactivated successfully.
Dec  1 17:24:13 np0005541603 podman[205830]: 2025-12-01 22:24:13.547170588 +0000 UTC m=+0.070735506 container died 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Dec  1 17:24:13 np0005541603 systemd[1]: 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74-53873e03afba935a.timer: Deactivated successfully.
Dec  1 17:24:13 np0005541603 systemd[1]: Stopped /usr/bin/podman healthcheck run 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74.
Dec  1 17:24:13 np0005541603 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74-userdata-shm.mount: Deactivated successfully.
Dec  1 17:24:13 np0005541603 systemd[1]: var-lib-containers-storage-overlay-47241ee35183cc8e6c16ed59dfeb1cde124294c9712ff4a3cb20385f0528f33d-merged.mount: Deactivated successfully.
Dec  1 17:24:14 np0005541603 podman[205830]: 2025-12-01 22:24:14.520176964 +0000 UTC m=+1.043741882 container cleanup 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, version=9.6, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-08-20T13:12:41)
Dec  1 17:24:14 np0005541603 podman[205830]: openstack_network_exporter
Dec  1 17:24:14 np0005541603 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  1 17:24:14 np0005541603 podman[205858]: openstack_network_exporter
Dec  1 17:24:14 np0005541603 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec  1 17:24:14 np0005541603 systemd[1]: Stopped openstack_network_exporter container.
Dec  1 17:24:14 np0005541603 systemd[1]: Starting openstack_network_exporter container...
Dec  1 17:24:14 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:24:14 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47241ee35183cc8e6c16ed59dfeb1cde124294c9712ff4a3cb20385f0528f33d/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  1 17:24:14 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47241ee35183cc8e6c16ed59dfeb1cde124294c9712ff4a3cb20385f0528f33d/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 17:24:14 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47241ee35183cc8e6c16ed59dfeb1cde124294c9712ff4a3cb20385f0528f33d/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 17:24:14 np0005541603 systemd[1]: Started /usr/bin/podman healthcheck run 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74.
Dec  1 17:24:14 np0005541603 podman[205871]: 2025-12-01 22:24:14.846248346 +0000 UTC m=+0.177374135 container init 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 17:24:14 np0005541603 openstack_network_exporter[205887]: INFO    22:24:14 main.go:48: registering *bridge.Collector
Dec  1 17:24:14 np0005541603 openstack_network_exporter[205887]: INFO    22:24:14 main.go:48: registering *coverage.Collector
Dec  1 17:24:14 np0005541603 openstack_network_exporter[205887]: INFO    22:24:14 main.go:48: registering *datapath.Collector
Dec  1 17:24:14 np0005541603 openstack_network_exporter[205887]: INFO    22:24:14 main.go:48: registering *iface.Collector
Dec  1 17:24:14 np0005541603 openstack_network_exporter[205887]: INFO    22:24:14 main.go:48: registering *memory.Collector
Dec  1 17:24:14 np0005541603 openstack_network_exporter[205887]: INFO    22:24:14 main.go:48: registering *ovnnorthd.Collector
Dec  1 17:24:14 np0005541603 openstack_network_exporter[205887]: INFO    22:24:14 main.go:48: registering *ovn.Collector
Dec  1 17:24:14 np0005541603 openstack_network_exporter[205887]: INFO    22:24:14 main.go:48: registering *ovsdbserver.Collector
Dec  1 17:24:14 np0005541603 openstack_network_exporter[205887]: INFO    22:24:14 main.go:48: registering *pmd_perf.Collector
Dec  1 17:24:14 np0005541603 openstack_network_exporter[205887]: INFO    22:24:14 main.go:48: registering *pmd_rxq.Collector
Dec  1 17:24:14 np0005541603 openstack_network_exporter[205887]: INFO    22:24:14 main.go:48: registering *vswitch.Collector
Dec  1 17:24:14 np0005541603 openstack_network_exporter[205887]: NOTICE  22:24:14 main.go:76: listening on https://:9105/metrics
Dec  1 17:24:14 np0005541603 podman[205871]: 2025-12-01 22:24:14.884381013 +0000 UTC m=+0.215506762 container start 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 17:24:14 np0005541603 podman[205871]: openstack_network_exporter
Dec  1 17:24:14 np0005541603 systemd[1]: Started openstack_network_exporter container.
Dec  1 17:24:14 np0005541603 podman[205897]: 2025-12-01 22:24:14.979960163 +0000 UTC m=+0.078247452 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Dec  1 17:24:15 np0005541603 python3.9[206069]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 17:24:16 np0005541603 podman[206193]: 2025-12-01 22:24:16.697901762 +0000 UTC m=+0.066813944 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 17:24:16 np0005541603 python3.9[206245]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  1 17:24:17 np0005541603 python3.9[206410]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:24:18 np0005541603 systemd[1]: Started libpod-conmon-6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367.scope.
Dec  1 17:24:18 np0005541603 podman[206411]: 2025-12-01 22:24:18.105907814 +0000 UTC m=+0.098343711 container exec 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec  1 17:24:18 np0005541603 podman[206411]: 2025-12-01 22:24:18.142735413 +0000 UTC m=+0.135171210 container exec_died 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  1 17:24:18 np0005541603 systemd[1]: libpod-conmon-6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367.scope: Deactivated successfully.
Dec  1 17:24:19 np0005541603 python3.9[206592]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:24:19 np0005541603 systemd[1]: Started libpod-conmon-6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367.scope.
Dec  1 17:24:19 np0005541603 podman[206593]: 2025-12-01 22:24:19.207755916 +0000 UTC m=+0.105758403 container exec 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 17:24:19 np0005541603 podman[206593]: 2025-12-01 22:24:19.243773202 +0000 UTC m=+0.141775689 container exec_died 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 17:24:19 np0005541603 systemd[1]: libpod-conmon-6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367.scope: Deactivated successfully.
Dec  1 17:24:20 np0005541603 python3.9[206775]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:20 np0005541603 nova_compute[189508]: 2025-12-01 22:24:20.699 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:24:20 np0005541603 nova_compute[189508]: 2025-12-01 22:24:20.722 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:24:20 np0005541603 nova_compute[189508]: 2025-12-01 22:24:20.722 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:24:20 np0005541603 nova_compute[189508]: 2025-12-01 22:24:20.722 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:24:20 np0005541603 nova_compute[189508]: 2025-12-01 22:24:20.722 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 17:24:20 np0005541603 python3.9[206927]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec  1 17:24:21 np0005541603 nova_compute[189508]: 2025-12-01 22:24:21.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:24:21 np0005541603 nova_compute[189508]: 2025-12-01 22:24:21.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 17:24:21 np0005541603 nova_compute[189508]: 2025-12-01 22:24:21.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 17:24:21 np0005541603 nova_compute[189508]: 2025-12-01 22:24:21.218 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 17:24:21 np0005541603 nova_compute[189508]: 2025-12-01 22:24:21.219 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:24:21 np0005541603 nova_compute[189508]: 2025-12-01 22:24:21.220 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:24:21 np0005541603 python3.9[207092]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:24:21 np0005541603 systemd[1]: Started libpod-conmon-ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4.scope.
Dec  1 17:24:22 np0005541603 podman[207093]: 2025-12-01 22:24:22.003828515 +0000 UTC m=+0.112871168 container exec ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 17:24:22 np0005541603 podman[207093]: 2025-12-01 22:24:22.009954142 +0000 UTC m=+0.118996805 container exec_died ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, 
org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 17:24:22 np0005541603 systemd[1]: libpod-conmon-ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4.scope: Deactivated successfully.
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.248 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.249 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.250 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.251 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.506 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.508 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5845MB free_disk=72.2566146850586GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.508 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.509 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.637 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.638 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.711 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.727 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.729 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 17:24:22 np0005541603 nova_compute[189508]: 2025-12-01 22:24:22.730 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:24:22 np0005541603 python3.9[207276]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:24:22 np0005541603 systemd[1]: Started libpod-conmon-ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4.scope.
Dec  1 17:24:22 np0005541603 podman[207277]: 2025-12-01 22:24:22.997083023 +0000 UTC m=+0.078051907 container exec ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  1 17:24:23 np0005541603 podman[207277]: 2025-12-01 22:24:23.028528237 +0000 UTC m=+0.109497131 container exec_died ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 17:24:23 np0005541603 systemd[1]: libpod-conmon-ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4.scope: Deactivated successfully.
Dec  1 17:24:23 np0005541603 python3.9[207460]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:24 np0005541603 python3.9[207612]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec  1 17:24:25 np0005541603 python3.9[207776]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:24:25 np0005541603 systemd[1]: Started libpod-conmon-a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8.scope.
Dec  1 17:24:25 np0005541603 podman[207777]: 2025-12-01 22:24:25.818691767 +0000 UTC m=+0.109270445 container exec a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 17:24:25 np0005541603 podman[207777]: 2025-12-01 22:24:25.854014494 +0000 UTC m=+0.144593162 container exec_died a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec  1 17:24:25 np0005541603 systemd[1]: libpod-conmon-a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8.scope: Deactivated successfully.
Dec  1 17:24:26 np0005541603 python3.9[207960]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:24:26 np0005541603 systemd[1]: Started libpod-conmon-a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8.scope.
Dec  1 17:24:26 np0005541603 podman[207961]: 2025-12-01 22:24:26.870203601 +0000 UTC m=+0.103267772 container exec a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 17:24:26 np0005541603 podman[207961]: 2025-12-01 22:24:26.904803516 +0000 UTC m=+0.137867647 container exec_died a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:24:26 np0005541603 systemd[1]: libpod-conmon-a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8.scope: Deactivated successfully.
Dec  1 17:24:27 np0005541603 python3.9[208144]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:28 np0005541603 python3.9[208296]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  1 17:24:29 np0005541603 python3.9[208460]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:24:29 np0005541603 systemd[1]: Started libpod-conmon-f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe.scope.
Dec  1 17:24:29 np0005541603 podman[208461]: 2025-12-01 22:24:29.781485556 +0000 UTC m=+0.111403327 container exec f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 17:24:29 np0005541603 podman[208461]: 2025-12-01 22:24:29.81359404 +0000 UTC m=+0.143511821 container exec_died f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Dec  1 17:24:29 np0005541603 systemd[1]: libpod-conmon-f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe.scope: Deactivated successfully.
Dec  1 17:24:30 np0005541603 podman[208494]: 2025-12-01 22:24:30.016465096 +0000 UTC m=+0.099417902 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 17:24:30 np0005541603 python3.9[208669]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:24:30 np0005541603 systemd[1]: Started libpod-conmon-f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe.scope.
Dec  1 17:24:30 np0005541603 podman[208670]: 2025-12-01 22:24:30.909867151 +0000 UTC m=+0.119703065 container exec f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  1 17:24:30 np0005541603 podman[208670]: 2025-12-01 22:24:30.945682212 +0000 UTC m=+0.155518096 container exec_died f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 17:24:30 np0005541603 systemd[1]: libpod-conmon-f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe.scope: Deactivated successfully.
Dec  1 17:24:32 np0005541603 python3.9[208852]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:32 np0005541603 podman[208976]: 2025-12-01 22:24:32.792157569 +0000 UTC m=+0.074814874 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec  1 17:24:32 np0005541603 python3.9[209024]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  1 17:24:33 np0005541603 python3.9[209189]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:24:33 np0005541603 systemd[1]: Started libpod-conmon-12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d.scope.
Dec  1 17:24:33 np0005541603 podman[209190]: 2025-12-01 22:24:33.985956018 +0000 UTC m=+0.105641441 container exec 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 17:24:34 np0005541603 podman[209190]: 2025-12-01 22:24:34.022957433 +0000 UTC m=+0.142642846 container exec_died 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 17:24:34 np0005541603 systemd[1]: libpod-conmon-12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d.scope: Deactivated successfully.
Dec  1 17:24:34 np0005541603 podman[209344]: 2025-12-01 22:24:34.726011122 +0000 UTC m=+0.092617666 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 17:24:34 np0005541603 python3.9[209391]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:24:35 np0005541603 systemd[1]: Started libpod-conmon-12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d.scope.
Dec  1 17:24:35 np0005541603 podman[209393]: 2025-12-01 22:24:35.077420273 +0000 UTC m=+0.111372006 container exec 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 17:24:35 np0005541603 podman[209393]: 2025-12-01 22:24:35.114781298 +0000 UTC m=+0.148732991 container exec_died 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 17:24:35 np0005541603 systemd[1]: libpod-conmon-12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d.scope: Deactivated successfully.
Dec  1 17:24:36 np0005541603 python3.9[209577]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:36 np0005541603 python3.9[209729]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  1 17:24:37 np0005541603 python3.9[209895]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:24:37 np0005541603 systemd[1]: Started libpod-conmon-8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1.scope.
Dec  1 17:24:37 np0005541603 podman[209896]: 2025-12-01 22:24:37.982682323 +0000 UTC m=+0.107252367 container exec 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 17:24:38 np0005541603 podman[209896]: 2025-12-01 22:24:38.018695129 +0000 UTC m=+0.143265163 container exec_died 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 17:24:38 np0005541603 systemd[1]: libpod-conmon-8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1.scope: Deactivated successfully.
Dec  1 17:24:38 np0005541603 python3.9[210079]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:24:38 np0005541603 systemd[1]: Started libpod-conmon-8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1.scope.
Dec  1 17:24:38 np0005541603 podman[210080]: 2025-12-01 22:24:38.983541451 +0000 UTC m=+0.087620162 container exec 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 17:24:39 np0005541603 podman[210080]: 2025-12-01 22:24:39.018920939 +0000 UTC m=+0.122999600 container exec_died 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 17:24:39 np0005541603 systemd[1]: libpod-conmon-8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1.scope: Deactivated successfully.
Dec  1 17:24:39 np0005541603 python3.9[210263]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:40 np0005541603 podman[210387]: 2025-12-01 22:24:40.773357687 +0000 UTC m=+0.142730218 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 17:24:40 np0005541603 python3.9[210439]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  1 17:24:41 np0005541603 python3.9[210607]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:24:42 np0005541603 systemd[1]: Started libpod-conmon-9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74.scope.
Dec  1 17:24:42 np0005541603 podman[210608]: 2025-12-01 22:24:42.043490772 +0000 UTC m=+0.102943763 container exec 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, release=1755695350, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter)
Dec  1 17:24:42 np0005541603 podman[210608]: 2025-12-01 22:24:42.075659788 +0000 UTC m=+0.135112779 container exec_died 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_id=edpm, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Dec  1 17:24:42 np0005541603 systemd[1]: libpod-conmon-9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74.scope: Deactivated successfully.
Dec  1 17:24:42 np0005541603 podman[210761]: 2025-12-01 22:24:42.80873999 +0000 UTC m=+0.085120630 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 17:24:42 np0005541603 python3.9[210808]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:24:43 np0005541603 systemd[1]: Started libpod-conmon-9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74.scope.
Dec  1 17:24:43 np0005541603 podman[210809]: 2025-12-01 22:24:43.119641566 +0000 UTC m=+0.108397190 container exec 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Dec  1 17:24:43 np0005541603 podman[210809]: 2025-12-01 22:24:43.153997174 +0000 UTC m=+0.142752798 container exec_died 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm)
Dec  1 17:24:43 np0005541603 systemd[1]: libpod-conmon-9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74.scope: Deactivated successfully.
Dec  1 17:24:43 np0005541603 python3.9[210992]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:44 np0005541603 python3.9[211144]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:45 np0005541603 podman[211268]: 2025-12-01 22:24:45.565798357 +0000 UTC m=+0.093256464 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.buildah.version=1.33.7, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, architecture=x86_64)
Dec  1 17:24:45 np0005541603 python3.9[211314]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:24:46 np0005541603 python3.9[211441]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627884.9812999-1082-213837750789408/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:47 np0005541603 podman[211565]: 2025-12-01 22:24:47.220206248 +0000 UTC m=+0.077241013 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 17:24:47 np0005541603 python3.9[211617]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:48 np0005541603 python3.9[211769]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:24:48 np0005541603 python3.9[211847]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:49 np0005541603 python3.9[211999]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:24:50 np0005541603 python3.9[212077]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.t6ak3eje recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:50 np0005541603 python3.9[212229]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:24:51 np0005541603 python3.9[212307]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:52 np0005541603 python3.9[212459]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:24:53 np0005541603 python3[212612]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 17:24:54 np0005541603 python3.9[212764]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:24:54 np0005541603 python3.9[212842]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:55 np0005541603 python3.9[212994]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:24:56 np0005541603 python3.9[213072]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:57 np0005541603 python3.9[213224]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:24:57 np0005541603 python3.9[213302]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:24:58 np0005541603 python3.9[213458]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:24:59 np0005541603 python3.9[213536]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:00 np0005541603 python3.9[213688]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:25:00 np0005541603 podman[213785]: 2025-12-01 22:25:00.668986069 +0000 UTC m=+0.064573879 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 17:25:00 np0005541603 python3.9[213836]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764627899.399089-1207-225678545807224/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:01 np0005541603 python3.9[213988]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:02 np0005541603 python3.9[214140]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:25:03 np0005541603 podman[214267]: 2025-12-01 22:25:03.322183537 +0000 UTC m=+0.075249686 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 17:25:03 np0005541603 python3.9[214313]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:04 np0005541603 python3.9[214465]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:25:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:25:04.595 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:25:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:25:04.596 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:25:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:25:04.596 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:25:05 np0005541603 podman[214590]: 2025-12-01 22:25:05.188736701 +0000 UTC m=+0.093863588 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  1 17:25:05 np0005541603 python3.9[214637]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:25:06 np0005541603 python3.9[214792]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:25:06 np0005541603 podman[203693]: time="2025-12-01T22:25:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 17:25:06 np0005541603 podman[203693]: @ - - [01/Dec/2025:22:25:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22543 "" "Go-http-client/1.1"
Dec  1 17:25:06 np0005541603 podman[203693]: @ - - [01/Dec/2025:22:25:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3407 "" "Go-http-client/1.1"
Dec  1 17:25:07 np0005541603 python3.9[214947]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:07 np0005541603 systemd[1]: session-25.scope: Deactivated successfully.
Dec  1 17:25:07 np0005541603 systemd[1]: session-25.scope: Consumed 2min 2.358s CPU time.
Dec  1 17:25:07 np0005541603 systemd-logind[788]: Session 25 logged out. Waiting for processes to exit.
Dec  1 17:25:07 np0005541603 systemd-logind[788]: Removed session 25.
Dec  1 17:25:08 np0005541603 openstack_network_exporter[205887]: ERROR   22:25:08 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 17:25:08 np0005541603 openstack_network_exporter[205887]: ERROR   22:25:08 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 17:25:08 np0005541603 openstack_network_exporter[205887]: ERROR   22:25:08 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 17:25:08 np0005541603 openstack_network_exporter[205887]: ERROR   22:25:08 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 17:25:08 np0005541603 openstack_network_exporter[205887]: 
Dec  1 17:25:08 np0005541603 openstack_network_exporter[205887]: ERROR   22:25:08 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 17:25:08 np0005541603 openstack_network_exporter[205887]: 
Dec  1 17:25:11 np0005541603 podman[214979]: 2025-12-01 22:25:11.884904145 +0000 UTC m=+0.162299956 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  1 17:25:13 np0005541603 systemd-logind[788]: New session 26 of user zuul.
Dec  1 17:25:13 np0005541603 systemd[1]: Started Session 26 of User zuul.
Dec  1 17:25:13 np0005541603 podman[215010]: 2025-12-01 22:25:13.473904907 +0000 UTC m=+0.077707968 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 17:25:14 np0005541603 python3.9[215182]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:25:14 np0005541603 systemd[1]: Reloading.
Dec  1 17:25:14 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:25:14 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:25:15 np0005541603 podman[215341]: 2025-12-01 22:25:15.743575465 +0000 UTC m=+0.107971447 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Dec  1 17:25:15 np0005541603 python3.9[215377]: ansible-ansible.builtin.service_facts Invoked
Dec  1 17:25:15 np0005541603 network[215402]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 17:25:15 np0005541603 network[215403]: 'network-scripts' will be removed from distribution in near future.
Dec  1 17:25:15 np0005541603 network[215404]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 17:25:17 np0005541603 podman[215422]: 2025-12-01 22:25:17.390180801 +0000 UTC m=+0.093493817 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 17:25:20 np0005541603 nova_compute[189508]: 2025-12-01 22:25:20.730 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:25:20 np0005541603 nova_compute[189508]: 2025-12-01 22:25:20.731 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 17:25:20 np0005541603 python3.9[215702]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:25:21 np0005541603 nova_compute[189508]: 2025-12-01 22:25:21.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:25:21 np0005541603 nova_compute[189508]: 2025-12-01 22:25:21.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 17:25:21 np0005541603 nova_compute[189508]: 2025-12-01 22:25:21.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 17:25:21 np0005541603 nova_compute[189508]: 2025-12-01 22:25:21.235 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 17:25:21 np0005541603 nova_compute[189508]: 2025-12-01 22:25:21.236 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:25:21 np0005541603 nova_compute[189508]: 2025-12-01 22:25:21.237 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:25:21 np0005541603 nova_compute[189508]: 2025-12-01 22:25:21.238 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:25:22 np0005541603 nova_compute[189508]: 2025-12-01 22:25:22.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:25:22 np0005541603 nova_compute[189508]: 2025-12-01 22:25:22.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:25:23 np0005541603 python3.9[215856]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:23 np0005541603 nova_compute[189508]: 2025-12-01 22:25:23.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:25:23 np0005541603 nova_compute[189508]: 2025-12-01 22:25:23.411 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:25:23 np0005541603 nova_compute[189508]: 2025-12-01 22:25:23.411 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:25:23 np0005541603 nova_compute[189508]: 2025-12-01 22:25:23.411 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:25:23 np0005541603 nova_compute[189508]: 2025-12-01 22:25:23.411 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 17:25:23 np0005541603 nova_compute[189508]: 2025-12-01 22:25:23.567 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 17:25:23 np0005541603 nova_compute[189508]: 2025-12-01 22:25:23.568 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5841MB free_disk=72.25665283203125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 17:25:23 np0005541603 nova_compute[189508]: 2025-12-01 22:25:23.568 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:25:23 np0005541603 nova_compute[189508]: 2025-12-01 22:25:23.569 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:25:23 np0005541603 nova_compute[189508]: 2025-12-01 22:25:23.679 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 17:25:23 np0005541603 nova_compute[189508]: 2025-12-01 22:25:23.680 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 17:25:23 np0005541603 nova_compute[189508]: 2025-12-01 22:25:23.710 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 17:25:23 np0005541603 nova_compute[189508]: 2025-12-01 22:25:23.729 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 17:25:23 np0005541603 nova_compute[189508]: 2025-12-01 22:25:23.731 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 17:25:23 np0005541603 nova_compute[189508]: 2025-12-01 22:25:23.731 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:25:23 np0005541603 python3.9[216008]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:24 np0005541603 python3.9[216161]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:25:24 np0005541603 nova_compute[189508]: 2025-12-01 22:25:24.726 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:25:25 np0005541603 python3.9[216313]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 17:25:26 np0005541603 python3.9[216465]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:25:26 np0005541603 systemd[1]: Reloading.
Dec  1 17:25:26 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:25:26 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:25:27 np0005541603 python3.9[216652]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:25:28 np0005541603 python3.9[216805]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:25:29 np0005541603 python3.9[216955]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:25:29 np0005541603 podman[203693]: time="2025-12-01T22:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 17:25:29 np0005541603 podman[203693]: @ - - [01/Dec/2025:22:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22543 "" "Go-http-client/1.1"
Dec  1 17:25:29 np0005541603 podman[203693]: @ - - [01/Dec/2025:22:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3419 "" "Go-http-client/1.1"
Dec  1 17:25:30 np0005541603 python3.9[217110]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:25:30 np0005541603 podman[217158]: 2025-12-01 22:25:30.823854766 +0000 UTC m=+0.096175005 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 17:25:31 np0005541603 python3.9[217255]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627929.8406725-125-193055661318898/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:25:31 np0005541603 openstack_network_exporter[205887]: ERROR   22:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 17:25:31 np0005541603 openstack_network_exporter[205887]: ERROR   22:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 17:25:31 np0005541603 openstack_network_exporter[205887]: ERROR   22:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 17:25:31 np0005541603 openstack_network_exporter[205887]: ERROR   22:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 17:25:31 np0005541603 openstack_network_exporter[205887]: 
Dec  1 17:25:31 np0005541603 openstack_network_exporter[205887]: ERROR   22:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 17:25:31 np0005541603 openstack_network_exporter[205887]: 
Dec  1 17:25:32 np0005541603 python3.9[217407]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  1 17:25:33 np0005541603 podman[217532]: 2025-12-01 22:25:33.4830505 +0000 UTC m=+0.085286429 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3)
Dec  1 17:25:33 np0005541603 python3.9[217569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:25:34 np0005541603 python3.9[217699]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764627933.1034873-171-121284817080537/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:35 np0005541603 python3.9[217849]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.260 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.261 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.261 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.262 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.263 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.264 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.264 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.264 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.265 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.266 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.266 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.267 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.267 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.268 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.269 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09d1a30>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.270 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.270 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.270 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.271 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.271 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.271 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.271 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.271 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.271 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.272 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.272 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.272 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.272 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.272 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.272 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.273 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.273 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.273 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.273 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.273 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.274 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.274 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.274 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.274 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.274 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.274 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.275 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.275 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.275 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.275 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.275 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.275 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 ceilometer_agent_compute[200237]: 2025-12-01 22:25:35.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 17:25:35 np0005541603 podman[217945]: 2025-12-01 22:25:35.553348864 +0000 UTC m=+0.086324399 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 17:25:35 np0005541603 python3.9[217984]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764627934.5686743-171-30813336224010/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:36 np0005541603 python3.9[218141]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:25:37 np0005541603 python3.9[218262]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764627936.0447316-171-4138596778321/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:37 np0005541603 python3.9[218412]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:25:38 np0005541603 python3.9[218564]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:25:39 np0005541603 python3.9[218716]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:25:40 np0005541603 python3.9[218837]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627939.008176-230-259578664960386/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:41 np0005541603 python3.9[218987]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:25:41 np0005541603 python3.9[219063]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:42 np0005541603 podman[219187]: 2025-12-01 22:25:42.243735479 +0000 UTC m=+0.170004170 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 17:25:42 np0005541603 python3.9[219226]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:25:43 np0005541603 python3.9[219360]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627941.7183268-230-214949547983259/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:43 np0005541603 podman[219484]: 2025-12-01 22:25:43.625386527 +0000 UTC m=+0.083335232 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  1 17:25:43 np0005541603 python3.9[219521]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:25:44 np0005541603 python3.9[219650]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627943.2047844-230-207476639699890/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:45 np0005541603 python3.9[219800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:25:45 np0005541603 python3.9[219921]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627944.5716648-230-192969486599851/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:45 np0005541603 podman[219922]: 2025-12-01 22:25:45.884344082 +0000 UTC m=+0.074849116 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, config_id=edpm, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Dec  1 17:25:46 np0005541603 python3.9[220092]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:25:47 np0005541603 python3.9[220213]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764627945.9625876-230-12705315920451/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:47 np0005541603 podman[220337]: 2025-12-01 22:25:47.779061836 +0000 UTC m=+0.080158310 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 17:25:47 np0005541603 python3.9[220373]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:25:48 np0005541603 python3.9[220460]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:49 np0005541603 python3.9[220612]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:50 np0005541603 python3.9[220764]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:25:51 np0005541603 python3.9[220916]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:25:51 np0005541603 python3.9[221068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:25:52 np0005541603 python3.9[221191]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627951.2620866-349-506971112263/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:25:52 np0005541603 python3.9[221267]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:25:53 np0005541603 python3.9[221390]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627951.2620866-349-506971112263/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:25:54 np0005541603 python3.9[221542]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:25:55 np0005541603 python3.9[221665]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764627953.832116-349-239859856708458/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 17:25:56 np0005541603 python3.9[221817]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Dec  1 17:25:57 np0005541603 python3.9[221969]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 17:25:58 np0005541603 python3[222121]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 17:25:58 np0005541603 podman[222159]: 2025-12-01 22:25:58.998096484 +0000 UTC m=+0.073201848 container create 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 17:25:59 np0005541603 podman[222159]: 2025-12-01 22:25:58.95699744 +0000 UTC m=+0.032102824 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec  1 17:25:59 np0005541603 python3[222121]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Dec  1 17:25:59 np0005541603 podman[203693]: time="2025-12-01T22:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 17:25:59 np0005541603 podman[203693]: @ - - [01/Dec/2025:22:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 25322 "" "Go-http-client/1.1"
Dec  1 17:25:59 np0005541603 podman[203693]: @ - - [01/Dec/2025:22:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3416 "" "Go-http-client/1.1"
Dec  1 17:26:00 np0005541603 python3.9[222350]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:26:01 np0005541603 python3.9[222504]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:26:01 np0005541603 openstack_network_exporter[205887]: ERROR   22:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 17:26:01 np0005541603 openstack_network_exporter[205887]: ERROR   22:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 17:26:01 np0005541603 openstack_network_exporter[205887]: ERROR   22:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 17:26:01 np0005541603 openstack_network_exporter[205887]: ERROR   22:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 17:26:01 np0005541603 openstack_network_exporter[205887]: 
Dec  1 17:26:01 np0005541603 openstack_network_exporter[205887]: ERROR   22:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 17:26:01 np0005541603 openstack_network_exporter[205887]: 
Dec  1 17:26:01 np0005541603 podman[222627]: 2025-12-01 22:26:01.733904552 +0000 UTC m=+0.088345417 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 17:26:01 np0005541603 python3.9[222673]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764627961.0913045-427-187348222486048/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:26:02 np0005541603 python3.9[222755]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:26:02 np0005541603 systemd[1]: Reloading.
Dec  1 17:26:03 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:26:03 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:26:03 np0005541603 podman[222838]: 2025-12-01 22:26:03.594092833 +0000 UTC m=+0.060266732 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 17:26:03 np0005541603 python3.9[222886]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:26:03 np0005541603 systemd[1]: Reloading.
Dec  1 17:26:04 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:26:04 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:26:04 np0005541603 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  1 17:26:04 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:26:04 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc2cb680775a67bcccb01e035ac0989c22a93d92dc7a1e43fb32b826bd75a6a/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 17:26:04 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc2cb680775a67bcccb01e035ac0989c22a93d92dc7a1e43fb32b826bd75a6a/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 17:26:04 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc2cb680775a67bcccb01e035ac0989c22a93d92dc7a1e43fb32b826bd75a6a/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  1 17:26:04 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc2cb680775a67bcccb01e035ac0989c22a93d92dc7a1e43fb32b826bd75a6a/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  1 17:26:04 np0005541603 systemd[1]: Started /usr/bin/podman healthcheck run 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841.
Dec  1 17:26:04 np0005541603 podman[222926]: 2025-12-01 22:26:04.550940341 +0000 UTC m=+0.194063299 container init 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: + sudo -E kolla_set_configs
Dec  1 17:26:04 np0005541603 podman[222926]: 2025-12-01 22:26:04.590368927 +0000 UTC m=+0.233491885 container start 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  1 17:26:04 np0005541603 podman[222926]: ceilometer_agent_ipmi
Dec  1 17:26:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:26:04.596 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:26:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:26:04.599 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:26:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:26:04.599 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:26:04 np0005541603 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: INFO:__main__:Validating config file
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: INFO:__main__:Copying service configuration files
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: INFO:__main__:Writing out command to execute
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: ++ cat /run_command
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: + ARGS=
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: + sudo kolla_copy_cacerts
Dec  1 17:26:04 np0005541603 podman[222949]: 2025-12-01 22:26:04.691162725 +0000 UTC m=+0.079168061 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 17:26:04 np0005541603 systemd[1]: 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841-27ae0c09ffac358a.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 17:26:04 np0005541603 systemd[1]: 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841-27ae0c09ffac358a.service: Failed with result 'exit-code'.
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: + [[ ! -n '' ]]
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: + . kolla_extend_start
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: + umask 0022
Dec  1 17:26:04 np0005541603 ceilometer_agent_ipmi[222942]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec  1 17:26:05 np0005541603 python3.9[223122]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.624 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.625 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.625 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.625 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.625 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.625 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.625 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.625 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.626 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.626 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.626 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.626 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.626 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.626 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.626 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.626 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.626 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.627 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.627 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.627 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.627 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.627 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.627 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.627 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.627 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.627 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.627 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.627 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.627 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.628 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.628 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.628 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.628 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.628 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.628 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.628 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.628 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.628 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.628 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.628 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.628 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.629 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.629 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.629 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.629 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.629 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.629 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.629 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.629 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.629 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.629 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.629 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.630 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.630 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.630 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.630 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.630 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.630 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.630 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.630 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.630 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.630 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.630 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.631 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.631 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.631 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.631 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.631 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.631 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.631 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.631 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.631 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.631 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.632 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.632 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.632 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.632 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.632 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.632 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.632 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.632 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.632 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.632 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.633 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.633 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.633 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.633 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.633 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.633 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.633 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.633 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.633 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.633 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.633 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.633 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.634 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.634 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.634 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.634 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.634 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.634 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.634 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.634 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.635 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.635 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.635 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.635 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.635 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.635 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.635 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.635 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.635 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.636 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.636 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.636 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.636 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.636 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.636 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.636 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.636 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.636 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.636 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.636 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.637 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.637 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.637 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.637 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.637 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.637 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.637 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.637 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.638 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.638 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.638 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.638 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.638 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.638 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.638 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.638 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.638 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.638 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.639 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.639 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.639 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.639 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.639 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.639 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.639 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.639 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.639 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.639 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.639 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.640 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.640 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.640 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.640 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.640 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.640 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.640 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.640 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.640 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.640 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.640 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.640 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.641 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.659 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.661 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.662 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  1 17:26:05 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:05.758 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpxsc91ldy/privsep.sock']
Dec  1 17:26:05 np0005541603 podman[223150]: 2025-12-01 22:26:05.834697176 +0000 UTC m=+0.095887986 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.459 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.460 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpxsc91ldy/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.329 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.336 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.340 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.340 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec  1 17:26:06 np0005541603 python3.9[223303]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.595 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.596 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.598 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.598 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.598 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.598 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.599 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.599 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.599 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.599 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.599 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.600 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.600 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.605 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.605 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.605 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.605 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.605 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.606 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.606 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.606 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.606 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.606 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.607 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.607 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.607 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.607 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.608 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.608 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.608 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.608 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.609 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.609 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.609 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.609 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.609 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.609 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.610 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.610 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.610 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.610 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.610 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.610 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.611 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.611 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.611 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.611 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.611 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.611 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.612 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.612 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.612 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.612 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.612 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.613 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.613 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.613 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.613 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.613 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.613 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.614 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.614 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.614 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.614 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.614 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.615 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.615 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.615 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.615 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.615 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.616 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.616 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.616 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.616 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.616 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.617 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.617 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.617 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.617 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.617 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.617 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.618 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.618 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.618 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.618 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.618 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.619 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.619 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.619 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.619 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.619 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.620 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.620 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.620 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.620 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.620 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.620 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.621 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.621 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.621 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.621 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.621 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.621 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.622 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.622 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.622 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.622 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.622 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.623 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.623 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.623 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.623 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.623 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.624 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.624 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.624 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.624 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.624 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.625 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.625 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.625 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.625 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.625 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.626 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.626 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.626 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.627 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.627 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.627 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.627 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.627 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.628 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.628 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.628 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.628 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.628 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.629 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.629 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.629 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.629 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.629 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.630 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.630 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.630 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.630 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.630 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.630 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.631 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.631 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.631 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.631 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.631 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.632 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.632 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.632 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.632 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.632 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.633 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.633 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.633 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.633 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.633 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.633 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.636 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.636 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.636 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.636 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.636 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.641 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.641 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.641 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.641 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.641 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.642 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.642 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.642 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.642 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.642 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.642 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.643 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.643 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.643 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.643 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec  1 17:26:06 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:06.645 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec  1 17:26:07 np0005541603 python3[223459]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 17:26:07 np0005541603 podman[223496]: 2025-12-01 22:26:07.996054596 +0000 UTC m=+0.082918350 container create c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, io.openshift.expose-services=, release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all 
of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30)
Dec  1 17:26:07 np0005541603 podman[223496]: 2025-12-01 22:26:07.955052675 +0000 UTC m=+0.041916469 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  1 17:26:08 np0005541603 python3[223459]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Dec  1 17:26:08 np0005541603 python3.9[223686]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 17:26:09 np0005541603 python3.9[223840]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:26:10 np0005541603 python3.9[223991]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764627970.0686734-489-211237978829487/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:26:11 np0005541603 python3.9[224067]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 17:26:11 np0005541603 systemd[1]: Reloading.
Dec  1 17:26:11 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:26:11 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:26:12 np0005541603 python3.9[224177]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 17:26:12 np0005541603 systemd[1]: Reloading.
Dec  1 17:26:12 np0005541603 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 17:26:12 np0005541603 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 17:26:12 np0005541603 podman[224179]: 2025-12-01 22:26:12.682426708 +0000 UTC m=+0.122024613 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 17:26:12 np0005541603 systemd[1]: Starting kepler container...
Dec  1 17:26:12 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:26:13 np0005541603 systemd[1]: Started /usr/bin/podman healthcheck run c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2.
Dec  1 17:26:13 np0005541603 podman[224241]: 2025-12-01 22:26:13.045977531 +0000 UTC m=+0.153781434 container init c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, vendor=Red Hat, Inc., container_name=kepler, version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, managed_by=edpm_ansible, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 17:26:13 np0005541603 kepler[224258]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.074876       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.075011       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.075037       1 config.go:295] kernel version: 5.14
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.075781       1 power.go:78] Unable to obtain power, use estimate method
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.075802       1 redfish.go:169] failed to get redfish credential file path
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.076107       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.076116       1 power.go:79] using none to obtain power
Dec  1 17:26:13 np0005541603 kepler[224258]: E1201 22:26:13.076129       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  1 17:26:13 np0005541603 kepler[224258]: E1201 22:26:13.076147       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  1 17:26:13 np0005541603 kepler[224258]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.077673       1 exporter.go:84] Number of CPUs: 8
Dec  1 17:26:13 np0005541603 podman[224241]: 2025-12-01 22:26:13.083974732 +0000 UTC m=+0.191778625 container start c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, io.openshift.expose-services=, architecture=x86_64, name=ubi9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543)
Dec  1 17:26:13 np0005541603 podman[224241]: kepler
Dec  1 17:26:13 np0005541603 systemd[1]: Started kepler container.
Dec  1 17:26:13 np0005541603 podman[224270]: 2025-12-01 22:26:13.204342956 +0000 UTC m=+0.101766472 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, version=9.4, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, name=ubi9, container_name=kepler, vcs-type=git, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm)
Dec  1 17:26:13 np0005541603 systemd[1]: c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2-281e6eb2bcd99231.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 17:26:13 np0005541603 systemd[1]: c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2-281e6eb2bcd99231.service: Failed with result 'exit-code'.
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.676060       1 watcher.go:83] Using in cluster k8s config
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.676122       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  1 17:26:13 np0005541603 kepler[224258]: E1201 22:26:13.676211       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.683776       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.683837       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.692631       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.692688       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.706748       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.706813       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.706837       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.724822       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.724877       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.724886       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.724895       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.724905       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.724924       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.725679       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.725718       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.725750       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.725777       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.725976       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  1 17:26:13 np0005541603 kepler[224258]: I1201 22:26:13.726834       1 exporter.go:208] Started Kepler in 652.149844ms
Dec  1 17:26:13 np0005541603 podman[224424]: 2025-12-01 22:26:13.847933414 +0000 UTC m=+0.113658392 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 17:26:14 np0005541603 python3.9[224472]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:26:14 np0005541603 systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec  1 17:26:14 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:14.332 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  1 17:26:14 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:14.435 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec  1 17:26:14 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:14.435 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec  1 17:26:14 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:14.436 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec  1 17:26:14 np0005541603 ceilometer_agent_ipmi[222942]: 2025-12-01 22:26:14.451 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Dec  1 17:26:14 np0005541603 systemd[1]: libpod-1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841.scope: Deactivated successfully.
Dec  1 17:26:14 np0005541603 systemd[1]: libpod-1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841.scope: Consumed 2.376s CPU time.
Dec  1 17:26:14 np0005541603 podman[224477]: 2025-12-01 22:26:14.629188295 +0000 UTC m=+0.374432247 container died 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Dec  1 17:26:14 np0005541603 systemd[1]: 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841-27ae0c09ffac358a.timer: Deactivated successfully.
Dec  1 17:26:14 np0005541603 systemd[1]: Stopped /usr/bin/podman healthcheck run 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841.
Dec  1 17:26:14 np0005541603 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841-userdata-shm.mount: Deactivated successfully.
Dec  1 17:26:14 np0005541603 systemd[1]: var-lib-containers-storage-overlay-ddc2cb680775a67bcccb01e035ac0989c22a93d92dc7a1e43fb32b826bd75a6a-merged.mount: Deactivated successfully.
Dec  1 17:26:14 np0005541603 podman[224477]: 2025-12-01 22:26:14.733787817 +0000 UTC m=+0.479031769 container cleanup 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:26:14 np0005541603 podman[224477]: ceilometer_agent_ipmi
Dec  1 17:26:14 np0005541603 podman[224504]: ceilometer_agent_ipmi
Dec  1 17:26:14 np0005541603 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec  1 17:26:14 np0005541603 systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec  1 17:26:14 np0005541603 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  1 17:26:15 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:26:15 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc2cb680775a67bcccb01e035ac0989c22a93d92dc7a1e43fb32b826bd75a6a/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 17:26:15 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc2cb680775a67bcccb01e035ac0989c22a93d92dc7a1e43fb32b826bd75a6a/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 17:26:15 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc2cb680775a67bcccb01e035ac0989c22a93d92dc7a1e43fb32b826bd75a6a/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  1 17:26:15 np0005541603 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc2cb680775a67bcccb01e035ac0989c22a93d92dc7a1e43fb32b826bd75a6a/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  1 17:26:15 np0005541603 systemd[1]: Started /usr/bin/podman healthcheck run 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841.
Dec  1 17:26:15 np0005541603 podman[224515]: 2025-12-01 22:26:15.137977296 +0000 UTC m=+0.259202889 container init 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: + sudo -E kolla_set_configs
Dec  1 17:26:15 np0005541603 podman[224515]: 2025-12-01 22:26:15.200354476 +0000 UTC m=+0.321580139 container start 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:26:15 np0005541603 podman[224515]: ceilometer_agent_ipmi
Dec  1 17:26:15 np0005541603 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Validating config file
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Copying service configuration files
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: INFO:__main__:Writing out command to execute
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: ++ cat /run_command
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: + ARGS=
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: + sudo kolla_copy_cacerts
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: + [[ ! -n '' ]]
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: + . kolla_extend_start
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: + umask 0022
Dec  1 17:26:15 np0005541603 ceilometer_agent_ipmi[224531]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec  1 17:26:15 np0005541603 podman[224538]: 2025-12-01 22:26:15.353543262 +0000 UTC m=+0.130160016 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 17:26:15 np0005541603 systemd[1]: 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841-69dbd50e3eaaefa4.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 17:26:15 np0005541603 systemd[1]: 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841-69dbd50e3eaaefa4.service: Failed with result 'exit-code'.
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.169 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.169 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.169 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.169 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.169 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.169 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.170 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.170 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.170 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.170 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.170 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.170 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.170 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.170 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.170 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.170 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.171 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.171 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.171 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.171 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.171 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.171 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.171 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.171 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.171 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.171 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.171 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.171 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.172 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.172 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.172 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.172 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.172 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.172 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.172 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.172 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.172 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.172 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.172 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.172 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.172 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.173 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.173 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.173 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.173 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.173 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.173 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.173 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.173 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.173 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.174 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.174 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.174 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.174 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.174 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.174 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.174 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.174 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.174 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.174 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.174 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.174 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.175 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.175 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.175 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.175 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.175 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.175 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.175 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.175 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.176 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.176 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.176 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.176 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.176 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.176 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.176 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.176 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.176 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.176 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.176 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.176 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.177 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.177 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.177 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.177 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.177 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.177 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.177 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.177 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.177 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.177 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.178 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.178 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.178 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.178 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.178 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.178 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.178 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.178 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.178 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.178 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.179 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.179 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.179 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.179 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.179 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.179 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.179 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.179 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.179 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.179 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.179 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.179 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.180 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.180 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.180 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.180 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.180 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.180 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.180 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.180 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.180 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.180 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.181 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.181 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.181 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.181 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.181 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.181 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.181 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.181 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.181 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.181 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.181 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.181 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.182 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.182 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.182 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.182 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.182 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.182 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.182 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.182 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.182 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.182 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.182 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.182 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.183 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.183 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.183 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.183 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.183 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.183 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.183 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.183 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.183 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.183 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.183 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.183 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.184 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.184 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.207 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.210 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.211 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.238 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp4g3_1xfs/privsep.sock']
Dec  1 17:26:16 np0005541603 podman[224690]: 2025-12-01 22:26:16.422152829 +0000 UTC m=+0.122220708 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Dec  1 17:26:16 np0005541603 python3.9[224739]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 17:26:16 np0005541603 systemd[1]: Stopping kepler container...
Dec  1 17:26:16 np0005541603 kepler[224258]: I1201 22:26:16.878331       1 exporter.go:218] Received shutdown signal
Dec  1 17:26:16 np0005541603 kepler[224258]: I1201 22:26:16.879144       1 exporter.go:226] Exiting...
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.972 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.973 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp4g3_1xfs/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.838 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.844 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.847 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  1 17:26:16 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:16.847 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec  1 17:26:17 np0005541603 systemd[1]: libpod-c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2.scope: Deactivated successfully.
Dec  1 17:26:17 np0005541603 podman[224746]: 2025-12-01 22:26:17.069103255 +0000 UTC m=+0.260499577 container died c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, version=9.4, managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, build-date=2024-09-18T21:23:30, release-0.7.12=, vendor=Red Hat, Inc., name=ubi9, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.084 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.084 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:17 np0005541603 systemd[1]: c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2-281e6eb2bcd99231.timer: Deactivated successfully.
Dec  1 17:26:17 np0005541603 systemd[1]: Stopped /usr/bin/podman healthcheck run c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2.
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.086 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.087 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.087 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.087 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.087 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.088 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.088 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.088 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.088 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.089 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.089 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.095 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.095 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.095 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.095 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.096 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.096 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.096 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.096 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.097 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.097 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.097 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.097 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.097 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.098 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.098 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.098 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.099 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.099 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.099 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.099 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.099 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.100 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.100 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.100 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.100 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.101 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.101 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.101 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.101 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.102 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.102 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.102 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.102 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.102 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.103 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.103 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.103 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.103 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.104 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 systemd[1]: var-lib-containers-storage-overlay-9aefce7bf656c5ce0f68b4aa340e76c69811d8e05c378db3feb43f4077c9a829-merged.mount: Deactivated successfully.
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.104 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.104 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.104 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.104 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2-userdata-shm.mount: Deactivated successfully.
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.105 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.105 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.105 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.105 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.106 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.106 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.106 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.106 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.106 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.107 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.107 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.107 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.107 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.107 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.108 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.108 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.108 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.108 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.108 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.109 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.109 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.109 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.109 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.110 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.110 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.110 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.110 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.111 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.111 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.111 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.111 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.112 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.112 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.112 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.112 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.112 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.113 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 podman[224746]: 2025-12-01 22:26:17.113264522 +0000 UTC m=+0.304660844 container cleanup c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, release=1214.1726694543, config_id=edpm, release-0.7.12=, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public)
Dec  1 17:26:17 np0005541603 podman[224746]: kepler
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.113 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.113 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.113 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.113 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.114 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.114 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.114 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.114 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.114 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.114 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.115 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.115 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.115 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.115 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.115 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.116 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.116 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.116 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.116 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.116 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.117 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.117 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.117 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.117 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.117 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.118 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.118 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.118 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.118 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.118 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.119 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.119 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.119 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.119 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.119 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.120 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.120 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.120 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.120 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.121 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.121 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.121 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.121 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.121 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.122 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.122 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.122 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.122 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.123 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.123 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.123 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.123 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.123 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.123 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.124 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.124 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.124 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.124 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.124 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.125 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.125 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.125 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.125 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.125 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.125 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.126 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.126 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.126 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.126 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.126 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.129 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.129 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.129 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.129 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.129 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.129 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.130 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.130 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.130 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.130 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.130 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.132 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.132 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.132 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.132 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.132 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.132 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.132 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.132 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.133 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.133 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.133 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.133 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.133 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.133 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.133 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.133 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.133 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.134 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.134 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.134 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.134 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.134 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec  1 17:26:17 np0005541603 ceilometer_agent_ipmi[224531]: 2025-12-01 22:26:17.137 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec  1 17:26:17 np0005541603 podman[224778]: kepler
Dec  1 17:26:17 np0005541603 systemd[1]: edpm_kepler.service: Deactivated successfully.
Dec  1 17:26:17 np0005541603 systemd[1]: Stopped kepler container.
Dec  1 17:26:17 np0005541603 systemd[1]: Starting kepler container...
Dec  1 17:26:17 np0005541603 systemd[1]: Started libcrun container.
Dec  1 17:26:17 np0005541603 systemd[1]: Started /usr/bin/podman healthcheck run c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2.
Dec  1 17:26:17 np0005541603 podman[224792]: 2025-12-01 22:26:17.393671398 +0000 UTC m=+0.150661294 container init c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, release=1214.1726694543, config_id=edpm, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, vcs-type=git)
Dec  1 17:26:17 np0005541603 kepler[224805]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  1 17:26:17 np0005541603 kepler[224805]: I1201 22:26:17.435746       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  1 17:26:17 np0005541603 kepler[224805]: I1201 22:26:17.435962       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  1 17:26:17 np0005541603 kepler[224805]: I1201 22:26:17.435999       1 config.go:295] kernel version: 5.14
Dec  1 17:26:17 np0005541603 kepler[224805]: I1201 22:26:17.436823       1 power.go:78] Unable to obtain power, use estimate method
Dec  1 17:26:17 np0005541603 kepler[224805]: I1201 22:26:17.436878       1 redfish.go:169] failed to get redfish credential file path
Dec  1 17:26:17 np0005541603 kepler[224805]: I1201 22:26:17.437568       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  1 17:26:17 np0005541603 kepler[224805]: I1201 22:26:17.437589       1 power.go:79] using none to obtain power
Dec  1 17:26:17 np0005541603 kepler[224805]: E1201 22:26:17.437619       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  1 17:26:17 np0005541603 kepler[224805]: E1201 22:26:17.437660       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  1 17:26:17 np0005541603 kepler[224805]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  1 17:26:17 np0005541603 podman[224792]: 2025-12-01 22:26:17.439621056 +0000 UTC m=+0.196610902 container start c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, release=1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, release-0.7.12=, container_name=kepler, version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git)
Dec  1 17:26:17 np0005541603 kepler[224805]: I1201 22:26:17.441118       1 exporter.go:84] Number of CPUs: 8
Dec  1 17:26:17 np0005541603 podman[224792]: kepler
Dec  1 17:26:17 np0005541603 systemd[1]: Started kepler container.
Dec  1 17:26:17 np0005541603 podman[224815]: 2025-12-01 22:26:17.581891299 +0000 UTC m=+0.126310546 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4, name=ubi9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release-0.7.12=, config_id=edpm, vcs-type=git, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 17:26:17 np0005541603 systemd[1]: c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2-5da7831d8699b0a7.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 17:26:17 np0005541603 systemd[1]: c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2-5da7831d8699b0a7.service: Failed with result 'exit-code'.
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.058793       1 watcher.go:83] Using in cluster k8s config
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.058819       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  1 17:26:18 np0005541603 kepler[224805]: E1201 22:26:18.059418       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.068692       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.068758       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.076886       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.076952       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  1 17:26:18 np0005541603 podman[224966]: 2025-12-01 22:26:18.079357696 +0000 UTC m=+0.100983100 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.094270       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.094425       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.094453       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.106958       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.107000       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.107007       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.107014       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.107023       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.107037       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.107138       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.107177       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.107203       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.107227       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.107431       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  1 17:26:18 np0005541603 kepler[224805]: I1201 22:26:18.109381       1 exporter.go:208] Started Kepler in 674.013683ms
Dec  1 17:26:18 np0005541603 python3.9[225014]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 17:26:19 np0005541603 python3.9[225181]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  1 17:26:20 np0005541603 nova_compute[189508]: 2025-12-01 22:26:20.193 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:26:21 np0005541603 nova_compute[189508]: 2025-12-01 22:26:21.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:26:21 np0005541603 nova_compute[189508]: 2025-12-01 22:26:21.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:26:21 np0005541603 nova_compute[189508]: 2025-12-01 22:26:21.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 17:26:21 np0005541603 python3.9[225346]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:26:21 np0005541603 systemd[1]: Started libpod-conmon-6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367.scope.
Dec  1 17:26:21 np0005541603 podman[225347]: 2025-12-01 22:26:21.567394574 +0000 UTC m=+0.163734390 container exec 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  1 17:26:21 np0005541603 podman[225347]: 2025-12-01 22:26:21.603971184 +0000 UTC m=+0.200310970 container exec_died 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 17:26:21 np0005541603 systemd[1]: libpod-conmon-6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367.scope: Deactivated successfully.
Dec  1 17:26:22 np0005541603 nova_compute[189508]: 2025-12-01 22:26:22.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:26:22 np0005541603 python3.9[225527]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:26:22 np0005541603 systemd[1]: Started libpod-conmon-6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367.scope.
Dec  1 17:26:22 np0005541603 podman[225528]: 2025-12-01 22:26:22.950431084 +0000 UTC m=+0.143809708 container exec 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  1 17:26:22 np0005541603 podman[225528]: 2025-12-01 22:26:22.984500972 +0000 UTC m=+0.177879576 container exec_died 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  1 17:26:23 np0005541603 systemd[1]: libpod-conmon-6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367.scope: Deactivated successfully.
Dec  1 17:26:23 np0005541603 nova_compute[189508]: 2025-12-01 22:26:23.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:26:23 np0005541603 nova_compute[189508]: 2025-12-01 22:26:23.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 17:26:23 np0005541603 nova_compute[189508]: 2025-12-01 22:26:23.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 17:26:23 np0005541603 nova_compute[189508]: 2025-12-01 22:26:23.224 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 17:26:23 np0005541603 nova_compute[189508]: 2025-12-01 22:26:23.226 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:26:23 np0005541603 nova_compute[189508]: 2025-12-01 22:26:23.226 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:26:24 np0005541603 python3.9[225709]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:26:24 np0005541603 nova_compute[189508]: 2025-12-01 22:26:24.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:26:25 np0005541603 nova_compute[189508]: 2025-12-01 22:26:25.195 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:26:25 np0005541603 nova_compute[189508]: 2025-12-01 22:26:25.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 17:26:25 np0005541603 nova_compute[189508]: 2025-12-01 22:26:25.241 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:26:25 np0005541603 nova_compute[189508]: 2025-12-01 22:26:25.242 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:26:25 np0005541603 nova_compute[189508]: 2025-12-01 22:26:25.243 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:26:25 np0005541603 nova_compute[189508]: 2025-12-01 22:26:25.244 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 17:26:25 np0005541603 python3.9[225861]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec  1 17:26:25 np0005541603 nova_compute[189508]: 2025-12-01 22:26:25.731 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 17:26:25 np0005541603 nova_compute[189508]: 2025-12-01 22:26:25.734 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5679MB free_disk=72.25837326049805GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 17:26:25 np0005541603 nova_compute[189508]: 2025-12-01 22:26:25.735 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:26:25 np0005541603 nova_compute[189508]: 2025-12-01 22:26:25.736 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:26:25 np0005541603 nova_compute[189508]: 2025-12-01 22:26:25.819 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 17:26:25 np0005541603 nova_compute[189508]: 2025-12-01 22:26:25.819 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 17:26:25 np0005541603 nova_compute[189508]: 2025-12-01 22:26:25.843 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 17:26:25 np0005541603 nova_compute[189508]: 2025-12-01 22:26:25.855 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 17:26:25 np0005541603 nova_compute[189508]: 2025-12-01 22:26:25.857 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 17:26:25 np0005541603 nova_compute[189508]: 2025-12-01 22:26:25.858 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:26:26 np0005541603 python3.9[226027]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:26:26 np0005541603 systemd[1]: Started libpod-conmon-ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4.scope.
Dec  1 17:26:26 np0005541603 podman[226028]: 2025-12-01 22:26:26.775169085 +0000 UTC m=+0.151403446 container exec ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec  1 17:26:26 np0005541603 podman[226028]: 2025-12-01 22:26:26.812433725 +0000 UTC m=+0.188668026 container exec_died ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:26:26 np0005541603 systemd[1]: libpod-conmon-ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4.scope: Deactivated successfully.
Dec  1 17:26:28 np0005541603 python3.9[226207]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:26:28 np0005541603 systemd[1]: Started libpod-conmon-ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4.scope.
Dec  1 17:26:28 np0005541603 podman[226208]: 2025-12-01 22:26:28.281564104 +0000 UTC m=+0.143502859 container exec ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec  1 17:26:28 np0005541603 podman[226208]: 2025-12-01 22:26:28.314672285 +0000 UTC m=+0.176611000 container exec_died ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 17:26:28 np0005541603 systemd[1]: libpod-conmon-ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4.scope: Deactivated successfully.
Dec  1 17:26:29 np0005541603 python3.9[226389]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:26:29 np0005541603 podman[203693]: time="2025-12-01T22:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 17:26:29 np0005541603 podman[203693]: @ - - [01/Dec/2025:22:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28293 "" "Go-http-client/1.1"
Dec  1 17:26:29 np0005541603 podman[203693]: @ - - [01/Dec/2025:22:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4260 "" "Go-http-client/1.1"
Dec  1 17:26:30 np0005541603 python3.9[226543]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec  1 17:26:31 np0005541603 openstack_network_exporter[205887]: ERROR   22:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 17:26:31 np0005541603 openstack_network_exporter[205887]: ERROR   22:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 17:26:31 np0005541603 openstack_network_exporter[205887]: ERROR   22:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 17:26:31 np0005541603 openstack_network_exporter[205887]: ERROR   22:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 17:26:31 np0005541603 openstack_network_exporter[205887]: 
Dec  1 17:26:31 np0005541603 openstack_network_exporter[205887]: ERROR   22:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 17:26:31 np0005541603 openstack_network_exporter[205887]: 
Dec  1 17:26:31 np0005541603 podman[226708]: 2025-12-01 22:26:31.897912605 +0000 UTC m=+0.100878936 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 17:26:32 np0005541603 python3.9[226709]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:26:32 np0005541603 systemd[1]: Started libpod-conmon-a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8.scope.
Dec  1 17:26:32 np0005541603 podman[226734]: 2025-12-01 22:26:32.186653511 +0000 UTC m=+0.155432741 container exec a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec  1 17:26:32 np0005541603 podman[226734]: 2025-12-01 22:26:32.221585134 +0000 UTC m=+0.190364374 container exec_died a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd)
Dec  1 17:26:32 np0005541603 systemd[1]: libpod-conmon-a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8.scope: Deactivated successfully.
Dec  1 17:26:33 np0005541603 python3.9[226914]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:26:33 np0005541603 systemd[1]: Started libpod-conmon-a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8.scope.
Dec  1 17:26:33 np0005541603 podman[226915]: 2025-12-01 22:26:33.436705725 +0000 UTC m=+0.135473658 container exec a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Dec  1 17:26:33 np0005541603 podman[226915]: 2025-12-01 22:26:33.471258637 +0000 UTC m=+0.170026600 container exec_died a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true)
Dec  1 17:26:33 np0005541603 systemd[1]: libpod-conmon-a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8.scope: Deactivated successfully.
Dec  1 17:26:33 np0005541603 podman[226971]: 2025-12-01 22:26:33.885530626 +0000 UTC m=+0.145351503 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec  1 17:26:34 np0005541603 python3.9[227119]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:26:35 np0005541603 python3.9[227271]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  1 17:26:36 np0005541603 podman[227408]: 2025-12-01 22:26:36.736344977 +0000 UTC m=+0.154539196 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 17:26:36 np0005541603 python3.9[227455]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:26:37 np0005541603 systemd[1]: Started libpod-conmon-f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe.scope.
Dec  1 17:26:37 np0005541603 podman[227457]: 2025-12-01 22:26:37.14170465 +0000 UTC m=+0.156505532 container exec f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 17:26:37 np0005541603 podman[227457]: 2025-12-01 22:26:37.177561439 +0000 UTC m=+0.192362291 container exec_died f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  1 17:26:37 np0005541603 systemd[1]: libpod-conmon-f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe.scope: Deactivated successfully.
Dec  1 17:26:38 np0005541603 python3.9[227637]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:26:38 np0005541603 systemd[1]: Started libpod-conmon-f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe.scope.
Dec  1 17:26:38 np0005541603 podman[227638]: 2025-12-01 22:26:38.453962229 +0000 UTC m=+0.123535496 container exec f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, 
container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec  1 17:26:38 np0005541603 podman[227638]: 2025-12-01 22:26:38.463814872 +0000 UTC m=+0.133388139 container exec_died f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.vendor=CentOS, 
container_name=ceilometer_agent_compute)
Dec  1 17:26:38 np0005541603 systemd[1]: libpod-conmon-f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe.scope: Deactivated successfully.
Dec  1 17:26:39 np0005541603 python3.9[227818]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:26:40 np0005541603 python3.9[227970]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  1 17:26:41 np0005541603 python3.9[228133]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:26:41 np0005541603 systemd[1]: Started libpod-conmon-12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d.scope.
Dec  1 17:26:41 np0005541603 podman[228134]: 2025-12-01 22:26:41.994650348 +0000 UTC m=+0.133099110 container exec 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 17:26:42 np0005541603 podman[228134]: 2025-12-01 22:26:42.030382304 +0000 UTC m=+0.168831036 container exec_died 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 17:26:42 np0005541603 systemd[1]: libpod-conmon-12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d.scope: Deactivated successfully.
Dec  1 17:26:43 np0005541603 podman[228315]: 2025-12-01 22:26:43.079902452 +0000 UTC m=+0.182974182 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 17:26:43 np0005541603 python3.9[228316]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:26:43 np0005541603 systemd[1]: Started libpod-conmon-12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d.scope.
Dec  1 17:26:43 np0005541603 podman[228342]: 2025-12-01 22:26:43.255261134 +0000 UTC m=+0.130477296 container exec 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 17:26:43 np0005541603 podman[228342]: 2025-12-01 22:26:43.290917868 +0000 UTC m=+0.166133929 container exec_died 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 17:26:43 np0005541603 systemd[1]: libpod-conmon-12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d.scope: Deactivated successfully.
Dec  1 17:26:44 np0005541603 podman[228495]: 2025-12-01 22:26:44.12103442 +0000 UTC m=+0.112564531 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  1 17:26:44 np0005541603 python3.9[228543]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:26:45 np0005541603 python3.9[228695]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  1 17:26:45 np0005541603 podman[228721]: 2025-12-01 22:26:45.804643236 +0000 UTC m=+0.069129814 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:26:45 np0005541603 systemd[1]: 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841-69dbd50e3eaaefa4.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 17:26:45 np0005541603 systemd[1]: 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841-69dbd50e3eaaefa4.service: Failed with result 'exit-code'.
Dec  1 17:26:46 np0005541603 podman[228877]: 2025-12-01 22:26:46.595177602 +0000 UTC m=+0.105272022 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, version=9.6, container_name=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 17:26:46 np0005541603 python3.9[228878]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:26:46 np0005541603 systemd[1]: Started libpod-conmon-8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1.scope.
Dec  1 17:26:46 np0005541603 podman[228896]: 2025-12-01 22:26:46.896200621 +0000 UTC m=+0.152246560 container exec 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 17:26:46 np0005541603 podman[228896]: 2025-12-01 22:26:46.931403191 +0000 UTC m=+0.187449130 container exec_died 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 17:26:46 np0005541603 systemd[1]: libpod-conmon-8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1.scope: Deactivated successfully.
Dec  1 17:26:47 np0005541603 podman[229037]: 2025-12-01 22:26:47.863417388 +0000 UTC m=+0.134668486 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, config_id=edpm, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, version=9.4, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., name=ubi9, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Dec  1 17:26:48 np0005541603 python3.9[229095]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:26:48 np0005541603 systemd[1]: Started libpod-conmon-8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1.scope.
Dec  1 17:26:48 np0005541603 podman[229096]: 2025-12-01 22:26:48.387631342 +0000 UTC m=+0.146877406 container exec 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 17:26:48 np0005541603 podman[229096]: 2025-12-01 22:26:48.422976576 +0000 UTC m=+0.182222570 container exec_died 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 17:26:48 np0005541603 systemd[1]: libpod-conmon-8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1.scope: Deactivated successfully.
Dec  1 17:26:48 np0005541603 podman[229110]: 2025-12-01 22:26:48.513181615 +0000 UTC m=+0.136254961 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 17:26:49 np0005541603 python3.9[229300]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:26:50 np0005541603 python3.9[229453]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  1 17:26:51 np0005541603 python3.9[229618]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:26:51 np0005541603 systemd[1]: Started libpod-conmon-9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74.scope.
Dec  1 17:26:51 np0005541603 podman[229619]: 2025-12-01 22:26:51.909885952 +0000 UTC m=+0.123072603 container exec 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, version=9.6, name=ubi9-minimal, maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec  1 17:26:51 np0005541603 podman[229619]: 2025-12-01 22:26:51.946715779 +0000 UTC m=+0.159902420 container exec_died 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 17:26:52 np0005541603 systemd[1]: libpod-conmon-9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74.scope: Deactivated successfully.
Dec  1 17:26:52 np0005541603 python3.9[229800]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:26:53 np0005541603 systemd[1]: Started libpod-conmon-9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74.scope.
Dec  1 17:26:53 np0005541603 podman[229801]: 2025-12-01 22:26:53.127213125 +0000 UTC m=+0.156091299 container exec 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  1 17:26:53 np0005541603 podman[229801]: 2025-12-01 22:26:53.162722204 +0000 UTC m=+0.191600378 container exec_died 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=)
Dec  1 17:26:53 np0005541603 systemd[1]: libpod-conmon-9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74.scope: Deactivated successfully.
Dec  1 17:26:54 np0005541603 python3.9[229983]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:26:55 np0005541603 python3.9[230135]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec  1 17:26:56 np0005541603 python3.9[230301]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:26:56 np0005541603 systemd[1]: Started libpod-conmon-1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841.scope.
Dec  1 17:26:56 np0005541603 podman[230302]: 2025-12-01 22:26:56.67613607 +0000 UTC m=+0.163542973 container exec 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  1 17:26:56 np0005541603 podman[230302]: 2025-12-01 22:26:56.71027529 +0000 UTC m=+0.197682123 container exec_died 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  1 17:26:56 np0005541603 systemd[1]: libpod-conmon-1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841.scope: Deactivated successfully.
Dec  1 17:26:57 np0005541603 python3.9[230482]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:26:57 np0005541603 systemd[1]: Started libpod-conmon-1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841.scope.
Dec  1 17:26:57 np0005541603 podman[230483]: 2025-12-01 22:26:57.996852382 +0000 UTC m=+0.130417144 container exec 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 17:26:58 np0005541603 podman[230483]: 2025-12-01 22:26:58.028422818 +0000 UTC m=+0.161987560 container exec_died 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  1 17:26:58 np0005541603 systemd[1]: libpod-conmon-1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841.scope: Deactivated successfully.
Dec  1 17:26:59 np0005541603 python3.9[230664]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:26:59 np0005541603 podman[203693]: time="2025-12-01T22:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 17:26:59 np0005541603 podman[203693]: @ - - [01/Dec/2025:22:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  1 17:26:59 np0005541603 podman[203693]: @ - - [01/Dec/2025:22:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4259 "" "Go-http-client/1.1"
Dec  1 17:27:00 np0005541603 python3.9[230816]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec  1 17:27:01 np0005541603 openstack_network_exporter[205887]: ERROR   22:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 17:27:01 np0005541603 openstack_network_exporter[205887]: ERROR   22:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 17:27:01 np0005541603 openstack_network_exporter[205887]: ERROR   22:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 17:27:01 np0005541603 openstack_network_exporter[205887]: ERROR   22:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 17:27:01 np0005541603 openstack_network_exporter[205887]: 
Dec  1 17:27:01 np0005541603 openstack_network_exporter[205887]: ERROR   22:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 17:27:01 np0005541603 openstack_network_exporter[205887]: 
Dec  1 17:27:01 np0005541603 python3.9[230981]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:27:01 np0005541603 systemd[1]: Started libpod-conmon-c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2.scope.
Dec  1 17:27:01 np0005541603 podman[230982]: 2025-12-01 22:27:01.756227867 +0000 UTC m=+0.147208745 container exec c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_id=edpm, build-date=2024-09-18T21:23:30, version=9.4, com.redhat.component=ubi9-container, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 17:27:01 np0005541603 podman[230982]: 2025-12-01 22:27:01.79290965 +0000 UTC m=+0.183890478 container exec_died c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=base rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30, release=1214.1726694543, version=9.4)
Dec  1 17:27:01 np0005541603 systemd[1]: libpod-conmon-c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2.scope: Deactivated successfully.
Dec  1 17:27:02 np0005541603 podman[231136]: 2025-12-01 22:27:02.72188692 +0000 UTC m=+0.113554180 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 17:27:02 np0005541603 python3.9[231180]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 17:27:03 np0005541603 systemd[1]: Started libpod-conmon-c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2.scope.
Dec  1 17:27:03 np0005541603 podman[231188]: 2025-12-01 22:27:03.074710915 +0000 UTC m=+0.128665923 container exec c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., version=9.4, distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release=1214.1726694543, container_name=kepler, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=)
Dec  1 17:27:03 np0005541603 podman[231188]: 2025-12-01 22:27:03.109113252 +0000 UTC m=+0.163068250 container exec_died c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, distribution-scope=public, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_id=edpm, io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc.)
Dec  1 17:27:03 np0005541603 systemd[1]: libpod-conmon-c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2.scope: Deactivated successfully.
Dec  1 17:27:04 np0005541603 podman[231368]: 2025-12-01 22:27:04.101235203 +0000 UTC m=+0.116531515 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 17:27:04 np0005541603 python3.9[231369]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:27:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:27:04.597 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 17:27:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:27:04.599 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 17:27:04 np0005541603 ovn_metadata_agent[106657]: 2025-12-01 22:27:04.599 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 17:27:05 np0005541603 python3.9[231538]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:27:06 np0005541603 python3.9[231690]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:27:07 np0005541603 podman[231785]: 2025-12-01 22:27:07.224999729 +0000 UTC m=+0.136363834 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, 
org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 17:27:07 np0005541603 python3.9[231829]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764628025.627781-844-14916453896014/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:27:08 np0005541603 python3.9[231982]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:27:09 np0005541603 python3.9[232134]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:27:10 np0005541603 python3.9[232212]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:27:11 np0005541603 python3.9[232364]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:27:11 np0005541603 python3.9[232442]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.7nvuzup7 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:27:13 np0005541603 python3.9[232594]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:27:13 np0005541603 podman[232644]: 2025-12-01 22:27:13.673276869 +0000 UTC m=+0.188041207 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec  1 17:27:13 np0005541603 python3.9[232688]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:27:14 np0005541603 podman[232819]: 2025-12-01 22:27:14.727067322 +0000 UTC m=+0.110350561 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:27:14 np0005541603 python3.9[232867]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 17:27:16 np0005541603 podman[232992]: 2025-12-01 22:27:16.054481527 +0000 UTC m=+0.141001385 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 17:27:16 np0005541603 python3[233038]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 17:27:16 np0005541603 podman[233115]: 2025-12-01 22:27:16.856023536 +0000 UTC m=+0.130098970 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, name=ubi9-minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, release=1755695350)
Dec  1 17:27:17 np0005541603 python3.9[233209]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 17:27:18 np0005541603 python3.9[233287]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 17:27:18 np0005541603 podman[233385]: 2025-12-01 22:27:18.843558174 +0000 UTC m=+0.124485629 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 17:27:18 np0005541603 podman[233388]: 2025-12-01 22:27:18.852352477 +0000 UTC m=+0.134308861 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., container_name=kepler, vendor=Red Hat, Inc., name=ubi9, release-0.7.12=, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.buildah.version=1.29.0, version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public)
Dec  1 17:27:19 np0005541603 python3.9[233480]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 22:27:54 compute-0 python3.9[236988]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 22:27:54 compute-0 systemd[1]: Stopping System Logging Service...
Dec  1 22:27:54 compute-0 rsyslogd[1008]: imjournal: 268 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  1 22:27:54 compute-0 rsyslogd[1008]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1008" x-info="https://www.rsyslog.com"] exiting on signal 15.
Dec  1 22:27:54 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Dec  1 22:27:54 compute-0 systemd[1]: Stopped System Logging Service.
Dec  1 22:27:54 compute-0 systemd[1]: rsyslog.service: Consumed 5.651s CPU time, 8.1M memory peak, read 0B from disk, written 7.0M to disk.
Dec  1 22:27:54 compute-0 systemd[1]: Starting System Logging Service...
Dec  1 22:27:55 compute-0 rsyslogd[236992]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="236992" x-info="https://www.rsyslog.com"] start
Dec  1 22:27:55 compute-0 systemd[1]: Started System Logging Service.
Dec  1 22:27:55 compute-0 rsyslogd[236992]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 22:27:55 compute-0 rsyslogd[236992]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Dec  1 22:27:55 compute-0 rsyslogd[236992]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Dec  1 22:27:55 compute-0 rsyslogd[236992]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Dec  1 22:27:55 compute-0 rsyslogd[236992]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Dec  1 22:27:55 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Dec  1 22:27:55 compute-0 systemd[1]: session-27.scope: Consumed 12.862s CPU time.
Dec  1 22:27:55 compute-0 systemd-logind[788]: Session 27 logged out. Waiting for processes to exit.
Dec  1 22:27:55 compute-0 systemd-logind[788]: Removed session 27.
Dec  1 22:27:59 compute-0 podman[203693]: time="2025-12-01T22:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:27:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:27:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4253 "" "Go-http-client/1.1"
Dec  1 22:28:01 compute-0 openstack_network_exporter[205887]: ERROR   22:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:28:01 compute-0 openstack_network_exporter[205887]: ERROR   22:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:28:01 compute-0 openstack_network_exporter[205887]: ERROR   22:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:28:01 compute-0 openstack_network_exporter[205887]: ERROR   22:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:28:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:28:01 compute-0 openstack_network_exporter[205887]: ERROR   22:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:28:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:28:03 compute-0 podman[237021]: 2025-12-01 22:28:03.846985091 +0000 UTC m=+0.122374518 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:28:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:28:04.599 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:28:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:28:04.600 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:28:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:28:04.600 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:28:05 compute-0 podman[237045]: 2025-12-01 22:28:05.862873635 +0000 UTC m=+0.130878843 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 22:28:08 compute-0 podman[237066]: 2025-12-01 22:28:08.877803751 +0000 UTC m=+0.142795516 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true)
Dec  1 22:28:14 compute-0 podman[237085]: 2025-12-01 22:28:14.931229353 +0000 UTC m=+0.214217224 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 22:28:15 compute-0 podman[237111]: 2025-12-01 22:28:15.848142836 +0000 UTC m=+0.117643041 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:28:17 compute-0 podman[237130]: 2025-12-01 22:28:17.827959473 +0000 UTC m=+0.094404122 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 22:28:17 compute-0 podman[237131]: 2025-12-01 22:28:17.847801334 +0000 UTC m=+0.106746357 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container)
Dec  1 22:28:20 compute-0 podman[237173]: 2025-12-01 22:28:20.268982982 +0000 UTC m=+0.120273083 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 22:28:20 compute-0 podman[237174]: 2025-12-01 22:28:20.303209453 +0000 UTC m=+0.151708883 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, architecture=x86_64, release-0.7.12=, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.buildah.version=1.29.0, config_id=edpm, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9)
Dec  1 22:28:24 compute-0 nova_compute[189508]: 2025-12-01 22:28:24.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:28:24 compute-0 nova_compute[189508]: 2025-12-01 22:28:24.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:28:24 compute-0 nova_compute[189508]: 2025-12-01 22:28:24.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:28:25 compute-0 nova_compute[189508]: 2025-12-01 22:28:25.196 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:28:25 compute-0 nova_compute[189508]: 2025-12-01 22:28:25.197 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:28:25 compute-0 nova_compute[189508]: 2025-12-01 22:28:25.702 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:28:25 compute-0 nova_compute[189508]: 2025-12-01 22:28:25.702 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:28:25 compute-0 nova_compute[189508]: 2025-12-01 22:28:25.703 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:28:25 compute-0 nova_compute[189508]: 2025-12-01 22:28:25.718 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 22:28:25 compute-0 nova_compute[189508]: 2025-12-01 22:28:25.718 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:28:25 compute-0 nova_compute[189508]: 2025-12-01 22:28:25.719 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:28:25 compute-0 nova_compute[189508]: 2025-12-01 22:28:25.719 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:28:27 compute-0 nova_compute[189508]: 2025-12-01 22:28:27.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:28:27 compute-0 nova_compute[189508]: 2025-12-01 22:28:27.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:28:27 compute-0 nova_compute[189508]: 2025-12-01 22:28:27.245 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:28:27 compute-0 nova_compute[189508]: 2025-12-01 22:28:27.246 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:28:27 compute-0 nova_compute[189508]: 2025-12-01 22:28:27.247 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:28:27 compute-0 nova_compute[189508]: 2025-12-01 22:28:27.248 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:28:27 compute-0 nova_compute[189508]: 2025-12-01 22:28:27.741 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:28:27 compute-0 nova_compute[189508]: 2025-12-01 22:28:27.742 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5717MB free_disk=72.25555419921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:28:27 compute-0 nova_compute[189508]: 2025-12-01 22:28:27.743 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:28:27 compute-0 nova_compute[189508]: 2025-12-01 22:28:27.743 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:28:27 compute-0 nova_compute[189508]: 2025-12-01 22:28:27.832 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:28:27 compute-0 nova_compute[189508]: 2025-12-01 22:28:27.833 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:28:27 compute-0 nova_compute[189508]: 2025-12-01 22:28:27.876 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:28:27 compute-0 nova_compute[189508]: 2025-12-01 22:28:27.903 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:28:27 compute-0 nova_compute[189508]: 2025-12-01 22:28:27.906 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:28:27 compute-0 nova_compute[189508]: 2025-12-01 22:28:27.907 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:28:29 compute-0 podman[203693]: time="2025-12-01T22:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:28:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:28:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4264 "" "Go-http-client/1.1"
Dec  1 22:28:31 compute-0 openstack_network_exporter[205887]: ERROR   22:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:28:31 compute-0 openstack_network_exporter[205887]: ERROR   22:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:28:31 compute-0 openstack_network_exporter[205887]: ERROR   22:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:28:31 compute-0 openstack_network_exporter[205887]: ERROR   22:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:28:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:28:31 compute-0 openstack_network_exporter[205887]: ERROR   22:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:28:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:28:32 compute-0 systemd-logind[788]: New session 28 of user zuul.
Dec  1 22:28:32 compute-0 systemd[1]: Started Session 28 of User zuul.
Dec  1 22:28:34 compute-0 podman[237370]: 2025-12-01 22:28:34.089222169 +0000 UTC m=+0.113757164 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 22:28:34 compute-0 python3[237412]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 22:28:36 compute-0 podman[237615]: 2025-12-01 22:28:36.749932412 +0000 UTC m=+0.177437858 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 22:28:36 compute-0 python3[237661]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 22:28:38 compute-0 python3[237815]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "nova_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 22:28:39 compute-0 podman[237827]: 2025-12-01 22:28:39.808232182 +0000 UTC m=+0.086157885 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 22:28:41 compute-0 python3[237989]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  1 22:28:42 compute-0 python3[238142]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 22:28:45 compute-0 python3[238367]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 22:28:45 compute-0 podman[238406]: 2025-12-01 22:28:45.908001057 +0000 UTC m=+0.171646750 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 22:28:46 compute-0 podman[238482]: 2025-12-01 22:28:46.042067618 +0000 UTC m=+0.106605027 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  1 22:28:46 compute-0 python3[238574]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 22:28:48 compute-0 podman[238613]: 2025-12-01 22:28:48.859406344 +0000 UTC m=+0.129835179 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec  1 22:28:48 compute-0 podman[238614]: 2025-12-01 22:28:48.883174612 +0000 UTC m=+0.153156744 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, name=ubi9-minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1755695350, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Dec  1 22:28:50 compute-0 podman[238651]: 2025-12-01 22:28:50.840887994 +0000 UTC m=+0.110419267 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, distribution-scope=public, release=1214.1726694543, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, version=9.4, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  1 22:28:50 compute-0 podman[238650]: 2025-12-01 22:28:50.858465003 +0000 UTC m=+0.138881201 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 22:28:59 compute-0 podman[203693]: time="2025-12-01T22:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:28:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:28:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4261 "" "Go-http-client/1.1"
Dec  1 22:29:01 compute-0 openstack_network_exporter[205887]: ERROR   22:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:29:01 compute-0 openstack_network_exporter[205887]: ERROR   22:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:29:01 compute-0 openstack_network_exporter[205887]: ERROR   22:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:29:01 compute-0 openstack_network_exporter[205887]: ERROR   22:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:29:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:29:01 compute-0 openstack_network_exporter[205887]: ERROR   22:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:29:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:29:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:29:04.600 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:29:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:29:04.601 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:29:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:29:04.601 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:29:04 compute-0 podman[238690]: 2025-12-01 22:29:04.8400157 +0000 UTC m=+0.117830614 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:29:07 compute-0 podman[238712]: 2025-12-01 22:29:07.803059655 +0000 UTC m=+0.081092549 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  1 22:29:10 compute-0 podman[238731]: 2025-12-01 22:29:10.917523903 +0000 UTC m=+0.181265408 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  1 22:29:16 compute-0 podman[238753]: 2025-12-01 22:29:16.44497847 +0000 UTC m=+0.088491952 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 22:29:16 compute-0 podman[238752]: 2025-12-01 22:29:16.490097517 +0000 UTC m=+0.149679554 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec  1 22:29:19 compute-0 podman[238796]: 2025-12-01 22:29:19.802573226 +0000 UTC m=+0.083456886 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:29:19 compute-0 podman[238797]: 2025-12-01 22:29:19.80583494 +0000 UTC m=+0.077495653 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, 
io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 22:29:21 compute-0 podman[238832]: 2025-12-01 22:29:21.818477593 +0000 UTC m=+0.098477262 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:29:21 compute-0 podman[238833]: 2025-12-01 22:29:21.857202904 +0000 UTC m=+0.133026192 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, container_name=kepler, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, distribution-scope=public, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, architecture=x86_64, io.buildah.version=1.29.0, io.openshift.expose-services=, version=9.4, vendor=Red Hat, Inc., release-0.7.12=)
Dec  1 22:29:24 compute-0 nova_compute[189508]: 2025-12-01 22:29:24.906 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:29:24 compute-0 nova_compute[189508]: 2025-12-01 22:29:24.907 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:29:24 compute-0 nova_compute[189508]: 2025-12-01 22:29:24.907 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:29:25 compute-0 nova_compute[189508]: 2025-12-01 22:29:25.195 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:29:25 compute-0 nova_compute[189508]: 2025-12-01 22:29:25.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:29:25 compute-0 nova_compute[189508]: 2025-12-01 22:29:25.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:29:25 compute-0 nova_compute[189508]: 2025-12-01 22:29:25.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:29:25 compute-0 nova_compute[189508]: 2025-12-01 22:29:25.225 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 22:29:25 compute-0 nova_compute[189508]: 2025-12-01 22:29:25.226 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:29:25 compute-0 nova_compute[189508]: 2025-12-01 22:29:25.227 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:29:26 compute-0 nova_compute[189508]: 2025-12-01 22:29:26.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:29:28 compute-0 nova_compute[189508]: 2025-12-01 22:29:28.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:29:28 compute-0 nova_compute[189508]: 2025-12-01 22:29:28.239 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:29:28 compute-0 nova_compute[189508]: 2025-12-01 22:29:28.240 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:29:28 compute-0 nova_compute[189508]: 2025-12-01 22:29:28.241 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:29:28 compute-0 nova_compute[189508]: 2025-12-01 22:29:28.241 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:29:28 compute-0 nova_compute[189508]: 2025-12-01 22:29:28.793 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:29:28 compute-0 nova_compute[189508]: 2025-12-01 22:29:28.795 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5704MB free_disk=72.25574493408203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:29:28 compute-0 nova_compute[189508]: 2025-12-01 22:29:28.795 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:29:28 compute-0 nova_compute[189508]: 2025-12-01 22:29:28.795 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:29:28 compute-0 nova_compute[189508]: 2025-12-01 22:29:28.865 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:29:28 compute-0 nova_compute[189508]: 2025-12-01 22:29:28.865 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:29:28 compute-0 nova_compute[189508]: 2025-12-01 22:29:28.896 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:29:28 compute-0 nova_compute[189508]: 2025-12-01 22:29:28.910 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:29:28 compute-0 nova_compute[189508]: 2025-12-01 22:29:28.912 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:29:28 compute-0 nova_compute[189508]: 2025-12-01 22:29:28.913 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:29:29 compute-0 podman[203693]: time="2025-12-01T22:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:29:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:29:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4268 "" "Go-http-client/1.1"
Dec  1 22:29:29 compute-0 nova_compute[189508]: 2025-12-01 22:29:29.914 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:29:31 compute-0 openstack_network_exporter[205887]: ERROR   22:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:29:31 compute-0 openstack_network_exporter[205887]: ERROR   22:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:29:31 compute-0 openstack_network_exporter[205887]: ERROR   22:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:29:31 compute-0 openstack_network_exporter[205887]: ERROR   22:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:29:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:29:31 compute-0 openstack_network_exporter[205887]: ERROR   22:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:29:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.262 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.263 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.264 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.264 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.266 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.266 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.267 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.267 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.270 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.273 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.275 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.276 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.276 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.276 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.277 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.277 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.284 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.284 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.285 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.285 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.286 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.286 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.286 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.286 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.288 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.288 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.288 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.288 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.288 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.288 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:29:35.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:29:35 compute-0 podman[238873]: 2025-12-01 22:29:35.81421999 +0000 UTC m=+0.086239958 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 22:29:38 compute-0 podman[238897]: 2025-12-01 22:29:38.825684056 +0000 UTC m=+0.104502006 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec  1 22:29:41 compute-0 podman[238915]: 2025-12-01 22:29:41.854752182 +0000 UTC m=+0.124962049 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 22:29:46 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Dec  1 22:29:46 compute-0 systemd[1]: session-28.scope: Consumed 11.632s CPU time.
Dec  1 22:29:46 compute-0 systemd-logind[788]: Session 28 logged out. Waiting for processes to exit.
Dec  1 22:29:46 compute-0 systemd-logind[788]: Removed session 28.
Dec  1 22:29:46 compute-0 podman[238938]: 2025-12-01 22:29:46.604204529 +0000 UTC m=+0.107403090 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  1 22:29:46 compute-0 podman[238958]: 2025-12-01 22:29:46.804187918 +0000 UTC m=+0.158535530 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  1 22:29:50 compute-0 podman[238986]: 2025-12-01 22:29:50.848955365 +0000 UTC m=+0.119208812 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 22:29:50 compute-0 podman[238987]: 2025-12-01 22:29:50.882266279 +0000 UTC m=+0.145191314 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, name=ubi9-minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 22:29:52 compute-0 podman[239025]: 2025-12-01 22:29:52.808222731 +0000 UTC m=+0.085338252 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:29:52 compute-0 podman[239026]: 2025-12-01 22:29:52.881267435 +0000 UTC m=+0.142497335 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, config_id=edpm, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release-0.7.12=, io.buildah.version=1.29.0, vcs-type=git)
Dec  1 22:29:59 compute-0 podman[203693]: time="2025-12-01T22:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:29:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:29:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4272 "" "Go-http-client/1.1"
Dec  1 22:30:01 compute-0 openstack_network_exporter[205887]: ERROR   22:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:30:01 compute-0 openstack_network_exporter[205887]: ERROR   22:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:30:01 compute-0 openstack_network_exporter[205887]: ERROR   22:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:30:01 compute-0 openstack_network_exporter[205887]: ERROR   22:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:30:01 compute-0 openstack_network_exporter[205887]: ERROR   22:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:30:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:30:04.603 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:30:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:30:04.604 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:30:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:30:04.604 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:30:06 compute-0 podman[239071]: 2025-12-01 22:30:06.854195422 +0000 UTC m=+0.118345217 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:30:09 compute-0 podman[239094]: 2025-12-01 22:30:09.888749777 +0000 UTC m=+0.153739332 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 22:30:12 compute-0 podman[239113]: 2025-12-01 22:30:12.874249962 +0000 UTC m=+0.145546414 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_managed=true, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 22:30:16 compute-0 podman[239133]: 2025-12-01 22:30:16.835252315 +0000 UTC m=+0.109411568 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 22:30:17 compute-0 podman[239151]: 2025-12-01 22:30:17.051637639 +0000 UTC m=+0.176980454 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:30:21 compute-0 podman[239177]: 2025-12-01 22:30:21.884913563 +0000 UTC m=+0.150260311 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:30:21 compute-0 podman[239178]: 2025-12-01 22:30:21.885922162 +0000 UTC m=+0.144983698 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, release=1755695350, version=9.6, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 22:30:23 compute-0 podman[239216]: 2025-12-01 22:30:23.80653979 +0000 UTC m=+0.085200487 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:30:23 compute-0 podman[239217]: 2025-12-01 22:30:23.902110326 +0000 UTC m=+0.163739400 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, managed_by=edpm_ansible)
Dec  1 22:30:25 compute-0 nova_compute[189508]: 2025-12-01 22:30:25.196 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:30:25 compute-0 nova_compute[189508]: 2025-12-01 22:30:25.196 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:30:25 compute-0 nova_compute[189508]: 2025-12-01 22:30:25.225 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:30:26 compute-0 nova_compute[189508]: 2025-12-01 22:30:26.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:30:26 compute-0 nova_compute[189508]: 2025-12-01 22:30:26.203 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:30:26 compute-0 nova_compute[189508]: 2025-12-01 22:30:26.204 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:30:26 compute-0 nova_compute[189508]: 2025-12-01 22:30:26.226 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 22:30:26 compute-0 nova_compute[189508]: 2025-12-01 22:30:26.227 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:30:26 compute-0 nova_compute[189508]: 2025-12-01 22:30:26.228 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:30:26 compute-0 nova_compute[189508]: 2025-12-01 22:30:26.229 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:30:27 compute-0 nova_compute[189508]: 2025-12-01 22:30:27.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:30:28 compute-0 nova_compute[189508]: 2025-12-01 22:30:28.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:30:29 compute-0 nova_compute[189508]: 2025-12-01 22:30:29.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:30:29 compute-0 podman[203693]: time="2025-12-01T22:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:30:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:30:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4272 "" "Go-http-client/1.1"
Dec  1 22:30:30 compute-0 nova_compute[189508]: 2025-12-01 22:30:30.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:30:30 compute-0 nova_compute[189508]: 2025-12-01 22:30:30.244 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:30:30 compute-0 nova_compute[189508]: 2025-12-01 22:30:30.245 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:30:30 compute-0 nova_compute[189508]: 2025-12-01 22:30:30.246 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:30:30 compute-0 nova_compute[189508]: 2025-12-01 22:30:30.247 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:30:30 compute-0 nova_compute[189508]: 2025-12-01 22:30:30.772 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:30:30 compute-0 nova_compute[189508]: 2025-12-01 22:30:30.774 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5714MB free_disk=72.2557258605957GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:30:30 compute-0 nova_compute[189508]: 2025-12-01 22:30:30.775 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:30:30 compute-0 nova_compute[189508]: 2025-12-01 22:30:30.775 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:30:30 compute-0 nova_compute[189508]: 2025-12-01 22:30:30.864 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:30:30 compute-0 nova_compute[189508]: 2025-12-01 22:30:30.865 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:30:30 compute-0 nova_compute[189508]: 2025-12-01 22:30:30.896 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:30:30 compute-0 nova_compute[189508]: 2025-12-01 22:30:30.916 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:30:30 compute-0 nova_compute[189508]: 2025-12-01 22:30:30.919 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:30:30 compute-0 nova_compute[189508]: 2025-12-01 22:30:30.920 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:30:31 compute-0 openstack_network_exporter[205887]: ERROR   22:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:30:31 compute-0 openstack_network_exporter[205887]: ERROR   22:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:30:31 compute-0 openstack_network_exporter[205887]: ERROR   22:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:30:31 compute-0 openstack_network_exporter[205887]: ERROR   22:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:30:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:30:31 compute-0 openstack_network_exporter[205887]: ERROR   22:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:30:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:30:37 compute-0 podman[239257]: 2025-12-01 22:30:37.813064178 +0000 UTC m=+0.088031973 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 22:30:40 compute-0 podman[239281]: 2025-12-01 22:30:40.8573946 +0000 UTC m=+0.125698342 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:30:43 compute-0 podman[239302]: 2025-12-01 22:30:43.833106058 +0000 UTC m=+0.108680065 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:30:47 compute-0 podman[239323]: 2025-12-01 22:30:47.868967922 +0000 UTC m=+0.128603484 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 22:30:47 compute-0 podman[239322]: 2025-12-01 22:30:47.904474269 +0000 UTC m=+0.170931087 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:30:52 compute-0 podman[239365]: 2025-12-01 22:30:52.833238751 +0000 UTC m=+0.103130945 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 22:30:52 compute-0 podman[239364]: 2025-12-01 22:30:52.842164897 +0000 UTC m=+0.119153474 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec  1 22:30:54 compute-0 podman[239406]: 2025-12-01 22:30:54.850420671 +0000 UTC m=+0.113490911 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, container_name=kepler, vendor=Red Hat, Inc., config_id=edpm, name=ubi9, release=1214.1726694543, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=)
Dec  1 22:30:54 compute-0 podman[239405]: 2025-12-01 22:30:54.86083728 +0000 UTC m=+0.123294503 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:30:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:30:56.114 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:30:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:30:56.115 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 22:30:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:30:56.115 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:30:59 compute-0 podman[203693]: time="2025-12-01T22:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:30:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:30:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4272 "" "Go-http-client/1.1"
Dec  1 22:31:01 compute-0 openstack_network_exporter[205887]: ERROR   22:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:31:01 compute-0 openstack_network_exporter[205887]: ERROR   22:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:31:01 compute-0 openstack_network_exporter[205887]: ERROR   22:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:31:01 compute-0 openstack_network_exporter[205887]: ERROR   22:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:31:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:31:01 compute-0 openstack_network_exporter[205887]: ERROR   22:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:31:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:31:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:31:04.604 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:31:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:31:04.606 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:31:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:31:04.606 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:31:08 compute-0 podman[239447]: 2025-12-01 22:31:08.826823748 +0000 UTC m=+0.104514165 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 22:31:11 compute-0 podman[239471]: 2025-12-01 22:31:11.861789982 +0000 UTC m=+0.128226034 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 22:31:14 compute-0 podman[239491]: 2025-12-01 22:31:14.812477753 +0000 UTC m=+0.110730392 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  1 22:31:18 compute-0 podman[239511]: 2025-12-01 22:31:18.844767986 +0000 UTC m=+0.115076747 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:31:18 compute-0 podman[239510]: 2025-12-01 22:31:18.94194688 +0000 UTC m=+0.207440753 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec  1 22:31:23 compute-0 podman[239555]: 2025-12-01 22:31:23.868659794 +0000 UTC m=+0.130624013 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, version=9.6, container_name=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Dec  1 22:31:23 compute-0 podman[239554]: 2025-12-01 22:31:23.902101982 +0000 UTC m=+0.171727581 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 22:31:25 compute-0 podman[239593]: 2025-12-01 22:31:25.826765862 +0000 UTC m=+0.102029193 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:31:25 compute-0 podman[239594]: 2025-12-01 22:31:25.864506493 +0000 UTC m=+0.125516066 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, architecture=x86_64, io.openshift.expose-services=, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., container_name=kepler, distribution-scope=public, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git)
Dec  1 22:31:26 compute-0 nova_compute[189508]: 2025-12-01 22:31:26.918 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:31:26 compute-0 nova_compute[189508]: 2025-12-01 22:31:26.923 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:31:26 compute-0 nova_compute[189508]: 2025-12-01 22:31:26.923 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:31:26 compute-0 nova_compute[189508]: 2025-12-01 22:31:26.924 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:31:26 compute-0 nova_compute[189508]: 2025-12-01 22:31:26.960 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 22:31:27 compute-0 nova_compute[189508]: 2025-12-01 22:31:27.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:31:27 compute-0 nova_compute[189508]: 2025-12-01 22:31:27.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:31:27 compute-0 nova_compute[189508]: 2025-12-01 22:31:27.202 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:31:28 compute-0 nova_compute[189508]: 2025-12-01 22:31:28.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:31:28 compute-0 nova_compute[189508]: 2025-12-01 22:31:28.202 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:31:29 compute-0 nova_compute[189508]: 2025-12-01 22:31:29.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:31:29 compute-0 podman[203693]: time="2025-12-01T22:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:31:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:31:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4271 "" "Go-http-client/1.1"
Dec  1 22:31:30 compute-0 nova_compute[189508]: 2025-12-01 22:31:30.204 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:31:31 compute-0 openstack_network_exporter[205887]: ERROR   22:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:31:31 compute-0 openstack_network_exporter[205887]: ERROR   22:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:31:31 compute-0 openstack_network_exporter[205887]: ERROR   22:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:31:31 compute-0 openstack_network_exporter[205887]: ERROR   22:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:31:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:31:31 compute-0 openstack_network_exporter[205887]: ERROR   22:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:31:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:31:32 compute-0 nova_compute[189508]: 2025-12-01 22:31:32.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:31:32 compute-0 nova_compute[189508]: 2025-12-01 22:31:32.304 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:31:32 compute-0 nova_compute[189508]: 2025-12-01 22:31:32.305 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:31:32 compute-0 nova_compute[189508]: 2025-12-01 22:31:32.306 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:31:32 compute-0 nova_compute[189508]: 2025-12-01 22:31:32.306 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:31:32 compute-0 nova_compute[189508]: 2025-12-01 22:31:32.850 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:31:32 compute-0 nova_compute[189508]: 2025-12-01 22:31:32.852 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5704MB free_disk=72.25581741333008GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:31:32 compute-0 nova_compute[189508]: 2025-12-01 22:31:32.853 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:31:32 compute-0 nova_compute[189508]: 2025-12-01 22:31:32.854 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:31:32 compute-0 nova_compute[189508]: 2025-12-01 22:31:32.946 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:31:32 compute-0 nova_compute[189508]: 2025-12-01 22:31:32.947 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:31:32 compute-0 nova_compute[189508]: 2025-12-01 22:31:32.976 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:31:33 compute-0 nova_compute[189508]: 2025-12-01 22:31:32.999 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:31:33 compute-0 nova_compute[189508]: 2025-12-01 22:31:33.002 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:31:33 compute-0 nova_compute[189508]: 2025-12-01 22:31:33.003 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.263 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.263 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.263 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.264 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.270 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.273 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.273 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.274 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.274 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.274 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.274 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.275 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.275 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.276 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.276 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.276 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.276 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.277 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.284 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.290 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:31:35.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:31:39 compute-0 podman[239635]: 2025-12-01 22:31:39.844361607 +0000 UTC m=+0.112968527 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 22:31:42 compute-0 podman[239659]: 2025-12-01 22:31:42.879060004 +0000 UTC m=+0.156501554 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 22:31:45 compute-0 podman[239679]: 2025-12-01 22:31:45.826403759 +0000 UTC m=+0.103514126 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  1 22:31:49 compute-0 podman[239700]: 2025-12-01 22:31:49.84361098 +0000 UTC m=+0.114071429 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  1 22:31:49 compute-0 podman[239699]: 2025-12-01 22:31:49.897563285 +0000 UTC m=+0.181115009 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec  1 22:31:50 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:31:50.660 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:31:50 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:31:50.662 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 22:31:54 compute-0 podman[239744]: 2025-12-01 22:31:54.836985331 +0000 UTC m=+0.100982734 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, release=1755695350, io.buildah.version=1.33.7, name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 22:31:54 compute-0 podman[239743]: 2025-12-01 22:31:54.850736175 +0000 UTC m=+0.116512439 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:31:55 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:31:55.666 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:31:56 compute-0 podman[239783]: 2025-12-01 22:31:56.855037448 +0000 UTC m=+0.121278445 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, release=1214.1726694543, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., release-0.7.12=, io.buildah.version=1.29.0, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  1 22:31:56 compute-0 podman[239782]: 2025-12-01 22:31:56.875162104 +0000 UTC m=+0.146925069 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:31:59 compute-0 podman[203693]: time="2025-12-01T22:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:31:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:31:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4273 "" "Go-http-client/1.1"
Dec  1 22:32:01 compute-0 openstack_network_exporter[205887]: ERROR   22:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:32:01 compute-0 openstack_network_exporter[205887]: ERROR   22:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:32:01 compute-0 openstack_network_exporter[205887]: ERROR   22:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:32:01 compute-0 openstack_network_exporter[205887]: ERROR   22:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:32:01 compute-0 openstack_network_exporter[205887]: ERROR   22:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.042 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "db72b066-1974-41bb-a917-13b5ba129196" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.043 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "db72b066-1974-41bb-a917-13b5ba129196" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.074 189512 DEBUG nova.compute.manager [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.240 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.241 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.253 189512 DEBUG nova.virt.hardware [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.254 189512 INFO nova.compute.claims [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.373 189512 DEBUG nova.compute.provider_tree [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.393 189512 DEBUG nova.scheduler.client.report [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.415 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.416 189512 DEBUG nova.compute.manager [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.468 189512 DEBUG nova.compute.manager [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.469 189512 DEBUG nova.network.neutron [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.498 189512 INFO nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.536 189512 DEBUG nova.compute.manager [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.644 189512 DEBUG nova.compute.manager [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.646 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.647 189512 INFO nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Creating image(s)#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.648 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "/var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.649 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.651 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.652 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "9c3ca1997acb58c7aa0cee513cca827b62b8612e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:02 compute-0 nova_compute[189508]: 2025-12-01 22:32:02.653 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "9c3ca1997acb58c7aa0cee513cca827b62b8612e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:03 compute-0 nova_compute[189508]: 2025-12-01 22:32:03.861 189512 WARNING oslo_policy.policy [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  1 22:32:03 compute-0 nova_compute[189508]: 2025-12-01 22:32:03.862 189512 WARNING oslo_policy.policy [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  1 22:32:04 compute-0 nova_compute[189508]: 2025-12-01 22:32:04.103 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:04 compute-0 nova_compute[189508]: 2025-12-01 22:32:04.208 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e.part --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:04 compute-0 nova_compute[189508]: 2025-12-01 22:32:04.210 189512 DEBUG nova.virt.images [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] ca09b2c0-a624-4fb0-b624-b8d92d761f4a was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  1 22:32:04 compute-0 nova_compute[189508]: 2025-12-01 22:32:04.211 189512 DEBUG nova.privsep.utils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  1 22:32:04 compute-0 nova_compute[189508]: 2025-12-01 22:32:04.212 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e.part /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:04 compute-0 nova_compute[189508]: 2025-12-01 22:32:04.411 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e.part /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e.converted" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:04 compute-0 nova_compute[189508]: 2025-12-01 22:32:04.415 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:04 compute-0 nova_compute[189508]: 2025-12-01 22:32:04.494 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e.converted --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:04 compute-0 nova_compute[189508]: 2025-12-01 22:32:04.496 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "9c3ca1997acb58c7aa0cee513cca827b62b8612e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:04 compute-0 nova_compute[189508]: 2025-12-01 22:32:04.521 189512 INFO oslo.privsep.daemon [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp5qn_91iv/privsep.sock']#033[00m
Dec  1 22:32:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:04.605 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:04.606 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:04.607 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.249 189512 INFO oslo.privsep.daemon [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.110 239842 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.114 239842 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.116 239842 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.116 239842 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239842#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.357 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.450 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.453 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "9c3ca1997acb58c7aa0cee513cca827b62b8612e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.455 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "9c3ca1997acb58c7aa0cee513cca827b62b8612e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.483 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.566 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.568 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e,backing_fmt=raw /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.623 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e,backing_fmt=raw /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk 1073741824" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.625 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "9c3ca1997acb58c7aa0cee513cca827b62b8612e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.626 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.699 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.701 189512 DEBUG nova.virt.disk.api [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Checking if we can resize image /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.702 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.725 189512 DEBUG nova.network.neutron [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Successfully created port: 64f1c8ea-4ab7-4266-8a8c-466433068355 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.771 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.772 189512 DEBUG nova.virt.disk.api [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Cannot resize image /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.773 189512 DEBUG nova.objects.instance [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lazy-loading 'migration_context' on Instance uuid db72b066-1974-41bb-a917-13b5ba129196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.797 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "/var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.798 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.800 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.801 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.803 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.804 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.837 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.838 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.901 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.903 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:05 compute-0 nova_compute[189508]: 2025-12-01 22:32:05.928 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:06 compute-0 nova_compute[189508]: 2025-12-01 22:32:06.021 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:06 compute-0 nova_compute[189508]: 2025-12-01 22:32:06.023 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:06 compute-0 nova_compute[189508]: 2025-12-01 22:32:06.025 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:06 compute-0 nova_compute[189508]: 2025-12-01 22:32:06.049 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:06 compute-0 nova_compute[189508]: 2025-12-01 22:32:06.123 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:06 compute-0 nova_compute[189508]: 2025-12-01 22:32:06.124 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:06 compute-0 nova_compute[189508]: 2025-12-01 22:32:06.192 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 1073741824" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:06 compute-0 nova_compute[189508]: 2025-12-01 22:32:06.194 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:06 compute-0 nova_compute[189508]: 2025-12-01 22:32:06.196 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:06 compute-0 nova_compute[189508]: 2025-12-01 22:32:06.280 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:06 compute-0 nova_compute[189508]: 2025-12-01 22:32:06.282 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 22:32:06 compute-0 nova_compute[189508]: 2025-12-01 22:32:06.283 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Ensure instance console log exists: /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 22:32:06 compute-0 nova_compute[189508]: 2025-12-01 22:32:06.284 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:06 compute-0 nova_compute[189508]: 2025-12-01 22:32:06.285 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:06 compute-0 nova_compute[189508]: 2025-12-01 22:32:06.285 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:07 compute-0 nova_compute[189508]: 2025-12-01 22:32:07.148 189512 DEBUG nova.network.neutron [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Successfully updated port: 64f1c8ea-4ab7-4266-8a8c-466433068355 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 22:32:07 compute-0 nova_compute[189508]: 2025-12-01 22:32:07.163 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:32:07 compute-0 nova_compute[189508]: 2025-12-01 22:32:07.164 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquired lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:32:07 compute-0 nova_compute[189508]: 2025-12-01 22:32:07.165 189512 DEBUG nova.network.neutron [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 22:32:07 compute-0 nova_compute[189508]: 2025-12-01 22:32:07.721 189512 DEBUG nova.network.neutron [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 22:32:07 compute-0 nova_compute[189508]: 2025-12-01 22:32:07.763 189512 DEBUG nova.compute.manager [req-9fd53b3b-6784-404a-aae0-2c1dfc14c7df req-7bdad3e0-b443-43c5-b3ef-8c7c3d9b75d0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Received event network-changed-64f1c8ea-4ab7-4266-8a8c-466433068355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:32:07 compute-0 nova_compute[189508]: 2025-12-01 22:32:07.764 189512 DEBUG nova.compute.manager [req-9fd53b3b-6784-404a-aae0-2c1dfc14c7df req-7bdad3e0-b443-43c5-b3ef-8c7c3d9b75d0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Refreshing instance network info cache due to event network-changed-64f1c8ea-4ab7-4266-8a8c-466433068355. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:32:07 compute-0 nova_compute[189508]: 2025-12-01 22:32:07.764 189512 DEBUG oslo_concurrency.lockutils [req-9fd53b3b-6784-404a-aae0-2c1dfc14c7df req-7bdad3e0-b443-43c5-b3ef-8c7c3d9b75d0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.068 189512 DEBUG nova.network.neutron [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updating instance_info_cache with network_info: [{"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.107 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Releasing lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.108 189512 DEBUG nova.compute.manager [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Instance network_info: |[{"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.110 189512 DEBUG oslo_concurrency.lockutils [req-9fd53b3b-6784-404a-aae0-2c1dfc14c7df req-7bdad3e0-b443-43c5-b3ef-8c7c3d9b75d0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.111 189512 DEBUG nova.network.neutron [req-9fd53b3b-6784-404a-aae0-2c1dfc14c7df req-7bdad3e0-b443-43c5-b3ef-8c7c3d9b75d0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Refreshing network info cache for port 64f1c8ea-4ab7-4266-8a8c-466433068355 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.123 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Start _get_guest_xml network_info=[{"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T22:30:45Z,direct_url=<?>,disk_format='qcow2',id=ca09b2c0-a624-4fb0-b624-b8d92d761f4a,min_disk=0,min_ram=0,name='cirros',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T22:30:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}], 'ephemerals': [{'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'size': 1, 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.143 189512 WARNING nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.160 189512 DEBUG nova.virt.libvirt.host [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.162 189512 DEBUG nova.virt.libvirt.host [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.170 189512 DEBUG nova.virt.libvirt.host [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.172 189512 DEBUG nova.virt.libvirt.host [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.174 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.175 189512 DEBUG nova.virt.hardware [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T22:30:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='aa9783c0-34c0-4a4d-bc86-59429edc9395',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T22:30:45Z,direct_url=<?>,disk_format='qcow2',id=ca09b2c0-a624-4fb0-b624-b8d92d761f4a,min_disk=0,min_ram=0,name='cirros',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T22:30:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.176 189512 DEBUG nova.virt.hardware [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.177 189512 DEBUG nova.virt.hardware [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.178 189512 DEBUG nova.virt.hardware [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.179 189512 DEBUG nova.virt.hardware [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.180 189512 DEBUG nova.virt.hardware [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.181 189512 DEBUG nova.virt.hardware [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.182 189512 DEBUG nova.virt.hardware [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.183 189512 DEBUG nova.virt.hardware [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.184 189512 DEBUG nova.virt.hardware [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.185 189512 DEBUG nova.virt.hardware [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.194 189512 DEBUG nova.privsep.utils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.197 189512 DEBUG nova.virt.libvirt.vif [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:31:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af2fbf0e1b5f40c19aed69d241db7727',ramdisk_id='',reservation_id='r-efoc96je',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs
=None,updated_at=2025-12-01T22:32:02Z,user_data=None,user_id='3b810e864d6c4d058e539f62ad181096',uuid=db72b066-1974-41bb-a917-13b5ba129196,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.198 189512 DEBUG nova.network.os_vif_util [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converting VIF {"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.200 189512 DEBUG nova.network.os_vif_util [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:3f:bd,bridge_name='br-int',has_traffic_filtering=True,id=64f1c8ea-4ab7-4266-8a8c-466433068355,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64f1c8ea-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.202 189512 DEBUG nova.objects.instance [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lazy-loading 'pci_devices' on Instance uuid db72b066-1974-41bb-a917-13b5ba129196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.221 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] End _get_guest_xml xml=<domain type="kvm">
Dec  1 22:32:09 compute-0 nova_compute[189508]:  <uuid>db72b066-1974-41bb-a917-13b5ba129196</uuid>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  <name>instance-00000001</name>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  <memory>524288</memory>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  <vcpu>1</vcpu>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  <metadata>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <nova:name>test_0</nova:name>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <nova:creationTime>2025-12-01 22:32:09</nova:creationTime>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <nova:flavor name="m1.small">
Dec  1 22:32:09 compute-0 nova_compute[189508]:        <nova:memory>512</nova:memory>
Dec  1 22:32:09 compute-0 nova_compute[189508]:        <nova:disk>1</nova:disk>
Dec  1 22:32:09 compute-0 nova_compute[189508]:        <nova:swap>0</nova:swap>
Dec  1 22:32:09 compute-0 nova_compute[189508]:        <nova:ephemeral>1</nova:ephemeral>
Dec  1 22:32:09 compute-0 nova_compute[189508]:        <nova:vcpus>1</nova:vcpus>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      </nova:flavor>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <nova:owner>
Dec  1 22:32:09 compute-0 nova_compute[189508]:        <nova:user uuid="3b810e864d6c4d058e539f62ad181096">admin</nova:user>
Dec  1 22:32:09 compute-0 nova_compute[189508]:        <nova:project uuid="af2fbf0e1b5f40c19aed69d241db7727">admin</nova:project>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      </nova:owner>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <nova:root type="image" uuid="ca09b2c0-a624-4fb0-b624-b8d92d761f4a"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <nova:ports>
Dec  1 22:32:09 compute-0 nova_compute[189508]:        <nova:port uuid="64f1c8ea-4ab7-4266-8a8c-466433068355">
Dec  1 22:32:09 compute-0 nova_compute[189508]:          <nova:ip type="fixed" address="192.168.0.177" ipVersion="4"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:        </nova:port>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      </nova:ports>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    </nova:instance>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  </metadata>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  <sysinfo type="smbios">
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <system>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <entry name="manufacturer">RDO</entry>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <entry name="product">OpenStack Compute</entry>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <entry name="serial">db72b066-1974-41bb-a917-13b5ba129196</entry>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <entry name="uuid">db72b066-1974-41bb-a917-13b5ba129196</entry>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <entry name="family">Virtual Machine</entry>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    </system>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  </sysinfo>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  <os>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <boot dev="hd"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <smbios mode="sysinfo"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  </os>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  <features>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <acpi/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <apic/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <vmcoreinfo/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  </features>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  <clock offset="utc">
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <timer name="hpet" present="no"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  </clock>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  <cpu mode="host-model" match="exact">
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  </cpu>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  <devices>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <target dev="vda" bus="virtio"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <target dev="vdb" bus="virtio"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <disk type="file" device="cdrom">
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.config"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <target dev="sda" bus="sata"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <interface type="ethernet">
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <mac address="fa:16:3e:78:3f:bd"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <mtu size="1442"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <target dev="tap64f1c8ea-4a"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    </interface>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <serial type="pty">
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <log file="/var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/console.log" append="off"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    </serial>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <video>
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    </video>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <input type="tablet" bus="usb"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <rng model="virtio">
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <backend model="random">/dev/urandom</backend>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    </rng>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <controller type="usb" index="0"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    <memballoon model="virtio">
Dec  1 22:32:09 compute-0 nova_compute[189508]:      <stats period="10"/>
Dec  1 22:32:09 compute-0 nova_compute[189508]:    </memballoon>
Dec  1 22:32:09 compute-0 nova_compute[189508]:  </devices>
Dec  1 22:32:09 compute-0 nova_compute[189508]: </domain>
Dec  1 22:32:09 compute-0 nova_compute[189508]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.223 189512 DEBUG nova.compute.manager [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Preparing to wait for external event network-vif-plugged-64f1c8ea-4ab7-4266-8a8c-466433068355 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.224 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "db72b066-1974-41bb-a917-13b5ba129196-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.225 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "db72b066-1974-41bb-a917-13b5ba129196-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.226 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "db72b066-1974-41bb-a917-13b5ba129196-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.227 189512 DEBUG nova.virt.libvirt.vif [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:31:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af2fbf0e1b5f40c19aed69d241db7727',ramdisk_id='',reservation_id='r-efoc96je',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:32:02Z,user_data=None,user_id='3b810e864d6c4d058e539f62ad181096',uuid=db72b066-1974-41bb-a917-13b5ba129196,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.228 189512 DEBUG nova.network.os_vif_util [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converting VIF {"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.229 189512 DEBUG nova.network.os_vif_util [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:3f:bd,bridge_name='br-int',has_traffic_filtering=True,id=64f1c8ea-4ab7-4266-8a8c-466433068355,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64f1c8ea-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.230 189512 DEBUG os_vif [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:3f:bd,bridge_name='br-int',has_traffic_filtering=True,id=64f1c8ea-4ab7-4266-8a8c-466433068355,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64f1c8ea-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.285 189512 DEBUG ovsdbapp.backend.ovs_idl [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.286 189512 DEBUG ovsdbapp.backend.ovs_idl [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.287 189512 DEBUG ovsdbapp.backend.ovs_idl [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.287 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.288 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.288 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.290 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.292 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.297 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.307 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.308 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.308 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.309 189512 INFO oslo.privsep.daemon [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpl44hb2gp/privsep.sock']#033[00m
Dec  1 22:32:09 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.469 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:10.127 189512 INFO oslo.privsep.daemon [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.971 239879 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.981 239879 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.986 239879 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:09.987 239879 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239879#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:10.430 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:10.431 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64f1c8ea-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:10.433 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap64f1c8ea-4a, col_values=(('external_ids', {'iface-id': '64f1c8ea-4ab7-4266-8a8c-466433068355', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:3f:bd', 'vm-uuid': 'db72b066-1974-41bb-a917-13b5ba129196'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:10.437 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:10 compute-0 NetworkManager[56278]: <info>  [1764628330.4381] manager: (tap64f1c8ea-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:10.441 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:10.448 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:10.449 189512 INFO os_vif [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:3f:bd,bridge_name='br-int',has_traffic_filtering=True,id=64f1c8ea-4ab7-4266-8a8c-466433068355,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64f1c8ea-4a')#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:10.471 189512 DEBUG nova.network.neutron [req-9fd53b3b-6784-404a-aae0-2c1dfc14c7df req-7bdad3e0-b443-43c5-b3ef-8c7c3d9b75d0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updated VIF entry in instance network info cache for port 64f1c8ea-4ab7-4266-8a8c-466433068355. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:10.471 189512 DEBUG nova.network.neutron [req-9fd53b3b-6784-404a-aae0-2c1dfc14c7df req-7bdad3e0-b443-43c5-b3ef-8c7c3d9b75d0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updating instance_info_cache with network_info: [{"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:10.495 189512 DEBUG oslo_concurrency.lockutils [req-9fd53b3b-6784-404a-aae0-2c1dfc14c7df req-7bdad3e0-b443-43c5-b3ef-8c7c3d9b75d0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:10.551 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:10.552 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:10.552 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:10.553 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No VIF found with MAC fa:16:3e:78:3f:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 22:32:10 compute-0 nova_compute[189508]: 2025-12-01 22:32:10.554 189512 INFO nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Using config drive#033[00m
Dec  1 22:32:10 compute-0 podman[239885]: 2025-12-01 22:32:10.837761265 +0000 UTC m=+0.101361864 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:32:13 compute-0 nova_compute[189508]: 2025-12-01 22:32:13.743 189512 INFO nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Creating config drive at /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.config#033[00m
Dec  1 22:32:13 compute-0 nova_compute[189508]: 2025-12-01 22:32:13.753 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_fubqf06 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:13 compute-0 podman[239908]: 2025-12-01 22:32:13.841373522 +0000 UTC m=+0.120308427 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 22:32:13 compute-0 nova_compute[189508]: 2025-12-01 22:32:13.898 189512 DEBUG oslo_concurrency.processutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_fubqf06" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:14 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec  1 22:32:14 compute-0 kernel: tap64f1c8ea-4a: entered promiscuous mode
Dec  1 22:32:14 compute-0 NetworkManager[56278]: <info>  [1764628334.0395] manager: (tap64f1c8ea-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.039 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:14 compute-0 ovn_controller[97770]: 2025-12-01T22:32:14Z|00027|binding|INFO|Claiming lport 64f1c8ea-4ab7-4266-8a8c-466433068355 for this chassis.
Dec  1 22:32:14 compute-0 ovn_controller[97770]: 2025-12-01T22:32:14Z|00028|binding|INFO|64f1c8ea-4ab7-4266-8a8c-466433068355: Claiming fa:16:3e:78:3f:bd 192.168.0.177
Dec  1 22:32:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:14.057 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:3f:bd 192.168.0.177'], port_security=['fa:16:3e:78:3f:bd 192.168.0.177'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.177/24', 'neutron:device_id': 'db72b066-1974-41bb-a917-13b5ba129196', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a56d0f98-60b7-42d6-a9fa-4c77301b81c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8157a1f-e2f4-4050-ab6e-a95d2880ddbb, chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=64f1c8ea-4ab7-4266-8a8c-466433068355) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:32:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:14.060 106662 INFO neutron.agent.ovn.metadata.agent [-] Port 64f1c8ea-4ab7-4266-8a8c-466433068355 in datapath dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c bound to our chassis#033[00m
Dec  1 22:32:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:14.065 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c#033[00m
Dec  1 22:32:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:14.067 106662 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpy9v2ahqa/privsep.sock']#033[00m
Dec  1 22:32:14 compute-0 systemd-udevd[239951]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:32:14 compute-0 NetworkManager[56278]: <info>  [1764628334.1355] device (tap64f1c8ea-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 22:32:14 compute-0 NetworkManager[56278]: <info>  [1764628334.1366] device (tap64f1c8ea-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 22:32:14 compute-0 systemd-machined[155759]: New machine qemu-1-instance-00000001.
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.157 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:14 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec  1 22:32:14 compute-0 ovn_controller[97770]: 2025-12-01T22:32:14Z|00029|binding|INFO|Setting lport 64f1c8ea-4ab7-4266-8a8c-466433068355 ovn-installed in OVS
Dec  1 22:32:14 compute-0 ovn_controller[97770]: 2025-12-01T22:32:14Z|00030|binding|INFO|Setting lport 64f1c8ea-4ab7-4266-8a8c-466433068355 up in Southbound
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.173 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.464 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.596 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764628334.595267, db72b066-1974-41bb-a917-13b5ba129196 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.596 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] VM Started (Lifecycle Event)#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.665 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.678 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764628334.5955126, db72b066-1974-41bb-a917-13b5ba129196 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.679 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] VM Paused (Lifecycle Event)#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.704 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.712 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.740 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:32:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:14.841 106662 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  1 22:32:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:14.843 106662 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpy9v2ahqa/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  1 22:32:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:14.658 239973 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  1 22:32:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:14.675 239973 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  1 22:32:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:14.681 239973 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Dec  1 22:32:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:14.682 239973 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239973#033[00m
Dec  1 22:32:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:14.849 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[a55fcbdb-305a-49b4-aeab-77b4ae625756]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.946 189512 DEBUG nova.compute.manager [req-ece21bbd-6878-4ace-b856-e09ba53ae2ba req-97cdc394-2d25-46b2-9346-af61e159d8ef c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Received event network-vif-plugged-64f1c8ea-4ab7-4266-8a8c-466433068355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.947 189512 DEBUG oslo_concurrency.lockutils [req-ece21bbd-6878-4ace-b856-e09ba53ae2ba req-97cdc394-2d25-46b2-9346-af61e159d8ef c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "db72b066-1974-41bb-a917-13b5ba129196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.947 189512 DEBUG oslo_concurrency.lockutils [req-ece21bbd-6878-4ace-b856-e09ba53ae2ba req-97cdc394-2d25-46b2-9346-af61e159d8ef c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "db72b066-1974-41bb-a917-13b5ba129196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.948 189512 DEBUG oslo_concurrency.lockutils [req-ece21bbd-6878-4ace-b856-e09ba53ae2ba req-97cdc394-2d25-46b2-9346-af61e159d8ef c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "db72b066-1974-41bb-a917-13b5ba129196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.948 189512 DEBUG nova.compute.manager [req-ece21bbd-6878-4ace-b856-e09ba53ae2ba req-97cdc394-2d25-46b2-9346-af61e159d8ef c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Processing event network-vif-plugged-64f1c8ea-4ab7-4266-8a8c-466433068355 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.949 189512 DEBUG nova.compute.manager [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.962 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.965 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764628334.9651635, db72b066-1974-41bb-a917-13b5ba129196 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.966 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] VM Resumed (Lifecycle Event)#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.975 189512 INFO nova.virt.libvirt.driver [-] [instance: db72b066-1974-41bb-a917-13b5ba129196] Instance spawned successfully.#033[00m
Dec  1 22:32:14 compute-0 nova_compute[189508]: 2025-12-01 22:32:14.976 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 22:32:15 compute-0 nova_compute[189508]: 2025-12-01 22:32:15.003 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:32:15 compute-0 nova_compute[189508]: 2025-12-01 22:32:15.039 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:32:15 compute-0 nova_compute[189508]: 2025-12-01 22:32:15.046 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:32:15 compute-0 nova_compute[189508]: 2025-12-01 22:32:15.047 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:32:15 compute-0 nova_compute[189508]: 2025-12-01 22:32:15.047 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:32:15 compute-0 nova_compute[189508]: 2025-12-01 22:32:15.047 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:32:15 compute-0 nova_compute[189508]: 2025-12-01 22:32:15.048 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:32:15 compute-0 nova_compute[189508]: 2025-12-01 22:32:15.048 189512 DEBUG nova.virt.libvirt.driver [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:32:15 compute-0 nova_compute[189508]: 2025-12-01 22:32:15.078 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:32:15 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  1 22:32:15 compute-0 nova_compute[189508]: 2025-12-01 22:32:15.160 189512 INFO nova.compute.manager [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Took 12.52 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 22:32:15 compute-0 nova_compute[189508]: 2025-12-01 22:32:15.163 189512 DEBUG nova.compute.manager [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:32:15 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  1 22:32:15 compute-0 nova_compute[189508]: 2025-12-01 22:32:15.247 189512 INFO nova.compute.manager [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Took 13.06 seconds to build instance.#033[00m
Dec  1 22:32:15 compute-0 nova_compute[189508]: 2025-12-01 22:32:15.269 189512 DEBUG oslo_concurrency.lockutils [None req-85845c60-cd1d-4a2c-8b41-a38871f52e2c 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "db72b066-1974-41bb-a917-13b5ba129196" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:15 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:15.421 239973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:15 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:15.421 239973 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:15 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:15.421 239973 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:15 compute-0 nova_compute[189508]: 2025-12-01 22:32:15.438 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:15 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:15.989 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[ac648e36-517b-4405-a11f-4e4969926c1d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:15 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:15.991 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdd6e3c27-11 in ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 22:32:15 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:15.994 239973 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdd6e3c27-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 22:32:15 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:15.994 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2f4c67-bb96-4164-8af4-4ab84e8387e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:15 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:15.998 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[8b6620b9-4f0d-401e-8865-a36e7ccc6d2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:16 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:16.032 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[9a61bbe5-6a0f-4831-afd8-9b32a4ebba63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:16 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:16.070 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[07f8a86c-4d08-46cc-b22a-ad2e7351f88f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:16 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:16.074 106662 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp79fwm0z2/privsep.sock']#033[00m
Dec  1 22:32:16 compute-0 podman[240001]: 2025-12-01 22:32:16.250622493 +0000 UTC m=+0.165271705 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  1 22:32:16 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:16.784 106662 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  1 22:32:16 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:16.785 106662 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp79fwm0z2/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  1 22:32:16 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:16.609 240026 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  1 22:32:16 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:16.615 240026 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  1 22:32:16 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:16.617 240026 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  1 22:32:16 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:16.618 240026 INFO oslo.privsep.daemon [-] privsep daemon running as pid 240026#033[00m
Dec  1 22:32:16 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:16.789 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[7a89fe04-b412-47b2-aadc-5e163df4275c]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:17 compute-0 nova_compute[189508]: 2025-12-01 22:32:17.110 189512 DEBUG nova.compute.manager [req-53b4dccb-8a82-48cb-8eb6-3b900dde4c7a req-659eb79f-5084-422a-b3b7-18c1164ff830 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Received event network-vif-plugged-64f1c8ea-4ab7-4266-8a8c-466433068355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:32:17 compute-0 nova_compute[189508]: 2025-12-01 22:32:17.111 189512 DEBUG oslo_concurrency.lockutils [req-53b4dccb-8a82-48cb-8eb6-3b900dde4c7a req-659eb79f-5084-422a-b3b7-18c1164ff830 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "db72b066-1974-41bb-a917-13b5ba129196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:17 compute-0 nova_compute[189508]: 2025-12-01 22:32:17.111 189512 DEBUG oslo_concurrency.lockutils [req-53b4dccb-8a82-48cb-8eb6-3b900dde4c7a req-659eb79f-5084-422a-b3b7-18c1164ff830 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "db72b066-1974-41bb-a917-13b5ba129196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:17 compute-0 nova_compute[189508]: 2025-12-01 22:32:17.113 189512 DEBUG oslo_concurrency.lockutils [req-53b4dccb-8a82-48cb-8eb6-3b900dde4c7a req-659eb79f-5084-422a-b3b7-18c1164ff830 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "db72b066-1974-41bb-a917-13b5ba129196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:17 compute-0 nova_compute[189508]: 2025-12-01 22:32:17.114 189512 DEBUG nova.compute.manager [req-53b4dccb-8a82-48cb-8eb6-3b900dde4c7a req-659eb79f-5084-422a-b3b7-18c1164ff830 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] No waiting events found dispatching network-vif-plugged-64f1c8ea-4ab7-4266-8a8c-466433068355 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:32:17 compute-0 nova_compute[189508]: 2025-12-01 22:32:17.115 189512 WARNING nova.compute.manager [req-53b4dccb-8a82-48cb-8eb6-3b900dde4c7a req-659eb79f-5084-422a-b3b7-18c1164ff830 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Received unexpected event network-vif-plugged-64f1c8ea-4ab7-4266-8a8c-466433068355 for instance with vm_state active and task_state None.#033[00m
Dec  1 22:32:17 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:17.309 240026 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:17 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:17.309 240026 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:17 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:17.309 240026 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:17 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:17.915 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[deda64b9-b2af-44c2-8e80-95e2d0ab613a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:17 compute-0 NetworkManager[56278]: <info>  [1764628337.9474] manager: (tapdd6e3c27-10): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Dec  1 22:32:17 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:17.946 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[6800b99c-047f-49b4-baa4-0218b6b993c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:17 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:17.993 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[e3cfc820-adf0-43e5-84d1-b84632ff6fe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:17.998 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[907b7975-6d81-471f-a604-747a3a98286c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:18 compute-0 systemd-udevd[240039]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:32:18 compute-0 NetworkManager[56278]: <info>  [1764628338.0339] device (tapdd6e3c27-10): carrier: link connected
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:18.046 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[eea08881-d1a0-4f6d-9d79-3ed69c15fe13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:18.090 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[e0096584-8689-449e-bc1d-b9aaf7d387b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd6e3c27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:b1:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384760, 'reachable_time': 21435, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240050, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:18.112 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[175687ef-124f-4a2a-aae4-670666ad3605]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:b108'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384760, 'tstamp': 384760}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240057, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:18.145 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b75511-89a1-4235-aba9-f03d47f22353]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd6e3c27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:b1:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384760, 'reachable_time': 21435, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240058, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:18.192 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[b542b0e6-2d66-4df3-835a-b3ca35ba5d4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:18.278 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[b8db347e-c220-4692-ac1d-72914291a451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:18.281 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd6e3c27-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:18.282 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:18.283 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd6e3c27-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:32:18 compute-0 nova_compute[189508]: 2025-12-01 22:32:18.286 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:18 compute-0 kernel: tapdd6e3c27-10: entered promiscuous mode
Dec  1 22:32:18 compute-0 NetworkManager[56278]: <info>  [1764628338.2890] manager: (tapdd6e3c27-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:18.290 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdd6e3c27-10, col_values=(('external_ids', {'iface-id': 'e303b09b-4673-4950-aa2d-91085a5bc5f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:32:18 compute-0 nova_compute[189508]: 2025-12-01 22:32:18.292 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:18 compute-0 nova_compute[189508]: 2025-12-01 22:32:18.293 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:18.295 106662 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:18.296 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[362cd527-69fe-48ae-ab9b-c3032386671a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:32:18 compute-0 ovn_controller[97770]: 2025-12-01T22:32:18Z|00031|binding|INFO|Releasing lport e303b09b-4673-4950-aa2d-91085a5bc5f8 from this chassis (sb_readonly=0)
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:18.299 106662 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: global
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    log         /dev/log local0 debug
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    log-tag     haproxy-metadata-proxy-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    user        root
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    group       root
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    maxconn     1024
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    pidfile     /var/lib/neutron/external/pids/dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c.pid.haproxy
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    daemon
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: defaults
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    log global
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    mode http
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    option httplog
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    option dontlognull
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    option http-server-close
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    option forwardfor
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    retries                 3
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    timeout http-request    30s
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    timeout connect         30s
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    timeout client          32s
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    timeout server          32s
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    timeout http-keep-alive 30s
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: listen listener
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    bind 169.254.169.254:80
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]:    http-request add-header X-OVN-Network-ID dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 22:32:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:18.301 106662 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'env', 'PROCESS_TAG=haproxy-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 22:32:18 compute-0 nova_compute[189508]: 2025-12-01 22:32:18.321 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:18 compute-0 podman[240088]: 2025-12-01 22:32:18.875020207 +0000 UTC m=+0.115456888 container create ff95b80f6a41a89e49021ae980ba0d2dc0b5f94b4fb3698555ead20fe655e4e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  1 22:32:18 compute-0 podman[240088]: 2025-12-01 22:32:18.808411899 +0000 UTC m=+0.048848610 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 22:32:18 compute-0 systemd[1]: Started libpod-conmon-ff95b80f6a41a89e49021ae980ba0d2dc0b5f94b4fb3698555ead20fe655e4e7.scope.
Dec  1 22:32:18 compute-0 systemd[1]: Started libcrun container.
Dec  1 22:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272cfdd874201b1817bf0494d025abaa5502e68a4188167a8eaf3d4514d1c75b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 22:32:19 compute-0 podman[240088]: 2025-12-01 22:32:19.01790469 +0000 UTC m=+0.258341441 container init ff95b80f6a41a89e49021ae980ba0d2dc0b5f94b4fb3698555ead20fe655e4e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  1 22:32:19 compute-0 podman[240088]: 2025-12-01 22:32:19.033388354 +0000 UTC m=+0.273825045 container start ff95b80f6a41a89e49021ae980ba0d2dc0b5f94b4fb3698555ead20fe655e4e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  1 22:32:19 compute-0 neutron-haproxy-ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c[240102]: [NOTICE]   (240106) : New worker (240108) forked
Dec  1 22:32:19 compute-0 neutron-haproxy-ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c[240102]: [NOTICE]   (240106) : Loading success.
Dec  1 22:32:19 compute-0 nova_compute[189508]: 2025-12-01 22:32:19.467 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:20 compute-0 nova_compute[189508]: 2025-12-01 22:32:20.443 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:20 compute-0 podman[240119]: 2025-12-01 22:32:20.835968898 +0000 UTC m=+0.104933667 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:32:20 compute-0 podman[240118]: 2025-12-01 22:32:20.924947277 +0000 UTC m=+0.189585492 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:32:24 compute-0 nova_compute[189508]: 2025-12-01 22:32:24.470 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:25 compute-0 nova_compute[189508]: 2025-12-01 22:32:25.448 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:25 compute-0 podman[240163]: 2025-12-01 22:32:25.846974505 +0000 UTC m=+0.120246535 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 22:32:25 compute-0 podman[240164]: 2025-12-01 22:32:25.889887035 +0000 UTC m=+0.152892391 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.)
Dec  1 22:32:26 compute-0 nova_compute[189508]: 2025-12-01 22:32:26.000 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:32:26 compute-0 nova_compute[189508]: 2025-12-01 22:32:26.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:32:26 compute-0 nova_compute[189508]: 2025-12-01 22:32:26.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:32:27 compute-0 podman[240201]: 2025-12-01 22:32:27.826976471 +0000 UTC m=+0.097968977 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 22:32:27 compute-0 podman[240202]: 2025-12-01 22:32:27.874128872 +0000 UTC m=+0.138031665 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  1 22:32:28 compute-0 nova_compute[189508]: 2025-12-01 22:32:28.226 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:32:28 compute-0 nova_compute[189508]: 2025-12-01 22:32:28.227 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:32:28 compute-0 nova_compute[189508]: 2025-12-01 22:32:28.228 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:32:28 compute-0 nova_compute[189508]: 2025-12-01 22:32:28.686 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:32:28 compute-0 nova_compute[189508]: 2025-12-01 22:32:28.687 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:32:28 compute-0 nova_compute[189508]: 2025-12-01 22:32:28.688 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:32:28 compute-0 nova_compute[189508]: 2025-12-01 22:32:28.689 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid db72b066-1974-41bb-a917-13b5ba129196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:32:29 compute-0 nova_compute[189508]: 2025-12-01 22:32:29.476 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:29 compute-0 podman[203693]: time="2025-12-01T22:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:32:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:32:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4753 "" "Go-http-client/1.1"
Dec  1 22:32:30 compute-0 nova_compute[189508]: 2025-12-01 22:32:30.453 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:31 compute-0 nova_compute[189508]: 2025-12-01 22:32:31.252 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:31 compute-0 ovn_controller[97770]: 2025-12-01T22:32:31Z|00032|binding|INFO|Releasing lport e303b09b-4673-4950-aa2d-91085a5bc5f8 from this chassis (sb_readonly=0)
Dec  1 22:32:31 compute-0 NetworkManager[56278]: <info>  [1764628351.2553] manager: (patch-br-int-to-provnet-2ca1b2ba-ced0-4d3b-a498-99d4e11f374a): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Dec  1 22:32:31 compute-0 NetworkManager[56278]: <info>  [1764628351.2611] device (patch-br-int-to-provnet-2ca1b2ba-ced0-4d3b-a498-99d4e11f374a)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 22:32:31 compute-0 NetworkManager[56278]: <info>  [1764628351.2729] manager: (patch-provnet-2ca1b2ba-ced0-4d3b-a498-99d4e11f374a-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Dec  1 22:32:31 compute-0 NetworkManager[56278]: <info>  [1764628351.2781] device (patch-provnet-2ca1b2ba-ced0-4d3b-a498-99d4e11f374a-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 22:32:31 compute-0 NetworkManager[56278]: <info>  [1764628351.2889] manager: (patch-br-int-to-provnet-2ca1b2ba-ced0-4d3b-a498-99d4e11f374a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Dec  1 22:32:31 compute-0 nova_compute[189508]: 2025-12-01 22:32:31.295 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:31 compute-0 NetworkManager[56278]: <info>  [1764628351.2974] manager: (patch-provnet-2ca1b2ba-ced0-4d3b-a498-99d4e11f374a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Dec  1 22:32:31 compute-0 ovn_controller[97770]: 2025-12-01T22:32:31Z|00033|binding|INFO|Releasing lport e303b09b-4673-4950-aa2d-91085a5bc5f8 from this chassis (sb_readonly=0)
Dec  1 22:32:31 compute-0 NetworkManager[56278]: <info>  [1764628351.3035] device (patch-br-int-to-provnet-2ca1b2ba-ced0-4d3b-a498-99d4e11f374a)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  1 22:32:31 compute-0 nova_compute[189508]: 2025-12-01 22:32:31.306 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:31 compute-0 NetworkManager[56278]: <info>  [1764628351.3087] device (patch-provnet-2ca1b2ba-ced0-4d3b-a498-99d4e11f374a-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  1 22:32:31 compute-0 openstack_network_exporter[205887]: ERROR   22:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:32:31 compute-0 openstack_network_exporter[205887]: ERROR   22:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:32:31 compute-0 openstack_network_exporter[205887]: ERROR   22:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:32:31 compute-0 openstack_network_exporter[205887]: ERROR   22:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:32:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:32:31 compute-0 openstack_network_exporter[205887]: ERROR   22:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:32:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:32:31 compute-0 nova_compute[189508]: 2025-12-01 22:32:31.488 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updating instance_info_cache with network_info: [{"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:32:31 compute-0 nova_compute[189508]: 2025-12-01 22:32:31.660 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:32:31 compute-0 nova_compute[189508]: 2025-12-01 22:32:31.661 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:32:31 compute-0 nova_compute[189508]: 2025-12-01 22:32:31.663 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:32:31 compute-0 nova_compute[189508]: 2025-12-01 22:32:31.664 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:32:31 compute-0 nova_compute[189508]: 2025-12-01 22:32:31.664 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:32:31 compute-0 nova_compute[189508]: 2025-12-01 22:32:31.666 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:32:31 compute-0 nova_compute[189508]: 2025-12-01 22:32:31.666 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:32:31 compute-0 nova_compute[189508]: 2025-12-01 22:32:31.667 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:32:31 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:31.859 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:32:31 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:31.861 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 22:32:31 compute-0 nova_compute[189508]: 2025-12-01 22:32:31.868 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:32 compute-0 nova_compute[189508]: 2025-12-01 22:32:32.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:32:32 compute-0 nova_compute[189508]: 2025-12-01 22:32:32.361 189512 DEBUG nova.compute.manager [req-b93ecf22-c7bc-402f-b78c-1eab10345b61 req-cd4ca315-3637-40b3-8df9-154fd575c242 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Received event network-changed-64f1c8ea-4ab7-4266-8a8c-466433068355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:32:32 compute-0 nova_compute[189508]: 2025-12-01 22:32:32.362 189512 DEBUG nova.compute.manager [req-b93ecf22-c7bc-402f-b78c-1eab10345b61 req-cd4ca315-3637-40b3-8df9-154fd575c242 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Refreshing instance network info cache due to event network-changed-64f1c8ea-4ab7-4266-8a8c-466433068355. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:32:32 compute-0 nova_compute[189508]: 2025-12-01 22:32:32.362 189512 DEBUG oslo_concurrency.lockutils [req-b93ecf22-c7bc-402f-b78c-1eab10345b61 req-cd4ca315-3637-40b3-8df9-154fd575c242 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:32:32 compute-0 nova_compute[189508]: 2025-12-01 22:32:32.363 189512 DEBUG oslo_concurrency.lockutils [req-b93ecf22-c7bc-402f-b78c-1eab10345b61 req-cd4ca315-3637-40b3-8df9-154fd575c242 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:32:32 compute-0 nova_compute[189508]: 2025-12-01 22:32:32.364 189512 DEBUG nova.network.neutron [req-b93ecf22-c7bc-402f-b78c-1eab10345b61 req-cd4ca315-3637-40b3-8df9-154fd575c242 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Refreshing network info cache for port 64f1c8ea-4ab7-4266-8a8c-466433068355 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:32:33 compute-0 nova_compute[189508]: 2025-12-01 22:32:33.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:32:33 compute-0 nova_compute[189508]: 2025-12-01 22:32:33.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 22:32:33 compute-0 nova_compute[189508]: 2025-12-01 22:32:33.221 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.238 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.239 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.240 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.240 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.373 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.469 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.471 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.497 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.566 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.567 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.658 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.659 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.723 189512 DEBUG nova.network.neutron [req-b93ecf22-c7bc-402f-b78c-1eab10345b61 req-cd4ca315-3637-40b3-8df9-154fd575c242 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updated VIF entry in instance network info cache for port 64f1c8ea-4ab7-4266-8a8c-466433068355. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.724 189512 DEBUG nova.network.neutron [req-b93ecf22-c7bc-402f-b78c-1eab10345b61 req-cd4ca315-3637-40b3-8df9-154fd575c242 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updating instance_info_cache with network_info: [{"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.726 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:32:34 compute-0 nova_compute[189508]: 2025-12-01 22:32:34.751 189512 DEBUG oslo_concurrency.lockutils [req-b93ecf22-c7bc-402f-b78c-1eab10345b61 req-cd4ca315-3637-40b3-8df9-154fd575c242 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:32:34 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:32:34.865 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.172 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.173 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5292MB free_disk=72.22453308105469GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.174 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.174 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.458 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.485 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.486 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.487 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.551 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing inventories for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.633 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating ProviderTree inventory for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.634 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating inventory in ProviderTree for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.656 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing aggregate associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.695 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing trait associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.776 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating inventory in ProviderTree for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.890 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updated inventory for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.891 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.892 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating inventory in ProviderTree for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.978 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.979 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.981 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:32:35 compute-0 nova_compute[189508]: 2025-12-01 22:32:35.981 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 22:32:39 compute-0 nova_compute[189508]: 2025-12-01 22:32:39.481 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:40 compute-0 nova_compute[189508]: 2025-12-01 22:32:40.463 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:41 compute-0 podman[240255]: 2025-12-01 22:32:41.852262158 +0000 UTC m=+0.127303308 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:32:44 compute-0 nova_compute[189508]: 2025-12-01 22:32:44.483 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:44 compute-0 podman[240279]: 2025-12-01 22:32:44.804212821 +0000 UTC m=+0.109983693 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:32:45 compute-0 nova_compute[189508]: 2025-12-01 22:32:45.467 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:46 compute-0 ovn_controller[97770]: 2025-12-01T22:32:46Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:78:3f:bd 192.168.0.177
Dec  1 22:32:46 compute-0 ovn_controller[97770]: 2025-12-01T22:32:46Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:78:3f:bd 192.168.0.177
Dec  1 22:32:46 compute-0 podman[240310]: 2025-12-01 22:32:46.856467991 +0000 UTC m=+0.122370253 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 22:32:49 compute-0 nova_compute[189508]: 2025-12-01 22:32:49.487 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:50 compute-0 nova_compute[189508]: 2025-12-01 22:32:50.472 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:51 compute-0 podman[240335]: 2025-12-01 22:32:51.843567993 +0000 UTC m=+0.107729621 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  1 22:32:51 compute-0 podman[240334]: 2025-12-01 22:32:51.921128645 +0000 UTC m=+0.194352242 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec  1 22:32:54 compute-0 nova_compute[189508]: 2025-12-01 22:32:54.492 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:55 compute-0 nova_compute[189508]: 2025-12-01 22:32:55.481 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:56 compute-0 podman[240378]: 2025-12-01 22:32:56.86387246 +0000 UTC m=+0.131853154 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 22:32:56 compute-0 podman[240379]: 2025-12-01 22:32:56.886100401 +0000 UTC m=+0.146702752 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, container_name=openstack_network_exporter, version=9.6, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Dec  1 22:32:58 compute-0 podman[240419]: 2025-12-01 22:32:58.831269978 +0000 UTC m=+0.108913784 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:32:58 compute-0 podman[240420]: 2025-12-01 22:32:58.831804592 +0000 UTC m=+0.102193449 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-type=git)
Dec  1 22:32:59 compute-0 nova_compute[189508]: 2025-12-01 22:32:59.496 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:32:59 compute-0 podman[203693]: time="2025-12-01T22:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:32:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:32:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4765 "" "Go-http-client/1.1"
Dec  1 22:33:00 compute-0 nova_compute[189508]: 2025-12-01 22:33:00.486 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:01 compute-0 ovn_controller[97770]: 2025-12-01T22:33:01Z|00034|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  1 22:33:01 compute-0 openstack_network_exporter[205887]: ERROR   22:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:33:01 compute-0 openstack_network_exporter[205887]: ERROR   22:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:33:01 compute-0 openstack_network_exporter[205887]: ERROR   22:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:33:01 compute-0 openstack_network_exporter[205887]: ERROR   22:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:33:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:33:01 compute-0 openstack_network_exporter[205887]: ERROR   22:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:33:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:33:04 compute-0 nova_compute[189508]: 2025-12-01 22:33:04.499 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:04.606 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:33:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:04.608 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:33:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:04.609 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:33:05 compute-0 nova_compute[189508]: 2025-12-01 22:33:05.490 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:09 compute-0 nova_compute[189508]: 2025-12-01 22:33:09.502 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:10 compute-0 nova_compute[189508]: 2025-12-01 22:33:10.494 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:10 compute-0 nova_compute[189508]: 2025-12-01 22:33:10.828 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:33:10 compute-0 nova_compute[189508]: 2025-12-01 22:33:10.865 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Triggering sync for uuid db72b066-1974-41bb-a917-13b5ba129196 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 22:33:10 compute-0 nova_compute[189508]: 2025-12-01 22:33:10.868 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "db72b066-1974-41bb-a917-13b5ba129196" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:33:10 compute-0 nova_compute[189508]: 2025-12-01 22:33:10.869 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "db72b066-1974-41bb-a917-13b5ba129196" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:33:10 compute-0 nova_compute[189508]: 2025-12-01 22:33:10.942 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "db72b066-1974-41bb-a917-13b5ba129196" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:33:12 compute-0 podman[240463]: 2025-12-01 22:33:12.840693308 +0000 UTC m=+0.110062005 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 22:33:14 compute-0 nova_compute[189508]: 2025-12-01 22:33:14.505 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:15 compute-0 nova_compute[189508]: 2025-12-01 22:33:15.499 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:15 compute-0 podman[240486]: 2025-12-01 22:33:15.847448345 +0000 UTC m=+0.121673694 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 22:33:17 compute-0 podman[240505]: 2025-12-01 22:33:17.857429186 +0000 UTC m=+0.122106826 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 22:33:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:19.139 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:33:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:19.141 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 22:33:19 compute-0 nova_compute[189508]: 2025-12-01 22:33:19.141 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:19 compute-0 nova_compute[189508]: 2025-12-01 22:33:19.507 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:20 compute-0 nova_compute[189508]: 2025-12-01 22:33:20.502 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:22 compute-0 podman[240527]: 2025-12-01 22:33:22.849060855 +0000 UTC m=+0.118910959 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:33:22 compute-0 podman[240526]: 2025-12-01 22:33:22.905634769 +0000 UTC m=+0.179495883 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:33:24 compute-0 nova_compute[189508]: 2025-12-01 22:33:24.509 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:25 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:25.143 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:33:25 compute-0 nova_compute[189508]: 2025-12-01 22:33:25.507 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:25 compute-0 nova_compute[189508]: 2025-12-01 22:33:25.944 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "ef18b98f-df89-44d0-9215-5c2e556e10be" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:33:25 compute-0 nova_compute[189508]: 2025-12-01 22:33:25.945 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:33:25 compute-0 nova_compute[189508]: 2025-12-01 22:33:25.963 189512 DEBUG nova.compute.manager [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.072 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.073 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.085 189512 DEBUG nova.virt.hardware [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.085 189512 INFO nova.compute.claims [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.236 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.272 189512 DEBUG nova.compute.provider_tree [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.288 189512 DEBUG nova.scheduler.client.report [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.306 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.308 189512 DEBUG nova.compute.manager [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.483 189512 DEBUG nova.compute.manager [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.484 189512 DEBUG nova.network.neutron [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.506 189512 INFO nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.546 189512 DEBUG nova.compute.manager [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.621 189512 DEBUG nova.compute.manager [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.623 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.623 189512 INFO nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Creating image(s)#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.625 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "/var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.625 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.627 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.646 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.709 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.711 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "9c3ca1997acb58c7aa0cee513cca827b62b8612e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.712 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "9c3ca1997acb58c7aa0cee513cca827b62b8612e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.728 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.804 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.806 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e,backing_fmt=raw /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.963 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e,backing_fmt=raw /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk 1073741824" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.964 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "9c3ca1997acb58c7aa0cee513cca827b62b8612e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:33:26 compute-0 nova_compute[189508]: 2025-12-01 22:33:26.965 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.054 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.054 189512 DEBUG nova.virt.disk.api [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Checking if we can resize image /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.055 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.122 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.123 189512 DEBUG nova.virt.disk.api [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Cannot resize image /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.123 189512 DEBUG nova.objects.instance [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lazy-loading 'migration_context' on Instance uuid ef18b98f-df89-44d0-9215-5c2e556e10be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.203 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "/var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.204 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.205 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.229 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.291 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.292 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.293 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.305 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.397 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.399 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.460 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 1073741824" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.462 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.464 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.565 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.568 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.569 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Ensure instance console log exists: /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.570 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.571 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:33:27 compute-0 nova_compute[189508]: 2025-12-01 22:33:27.571 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:33:27 compute-0 podman[240598]: 2025-12-01 22:33:27.834248813 +0000 UTC m=+0.103308750 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:33:27 compute-0 podman[240599]: 2025-12-01 22:33:27.861633166 +0000 UTC m=+0.128847992 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, version=9.6, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, release=1755695350, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Dec  1 22:33:28 compute-0 nova_compute[189508]: 2025-12-01 22:33:28.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:33:29 compute-0 nova_compute[189508]: 2025-12-01 22:33:29.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:33:29 compute-0 nova_compute[189508]: 2025-12-01 22:33:29.512 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:29 compute-0 podman[203693]: time="2025-12-01T22:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:33:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:33:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
Dec  1 22:33:29 compute-0 podman[240637]: 2025-12-01 22:33:29.82672398 +0000 UTC m=+0.100469631 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 22:33:29 compute-0 podman[240638]: 2025-12-01 22:33:29.873595798 +0000 UTC m=+0.139361150 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release-0.7.12=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Dec  1 22:33:30 compute-0 nova_compute[189508]: 2025-12-01 22:33:30.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:33:30 compute-0 nova_compute[189508]: 2025-12-01 22:33:30.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:33:30 compute-0 nova_compute[189508]: 2025-12-01 22:33:30.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:33:30 compute-0 nova_compute[189508]: 2025-12-01 22:33:30.232 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  1 22:33:30 compute-0 nova_compute[189508]: 2025-12-01 22:33:30.511 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:30 compute-0 nova_compute[189508]: 2025-12-01 22:33:30.753 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:33:30 compute-0 nova_compute[189508]: 2025-12-01 22:33:30.754 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:33:30 compute-0 nova_compute[189508]: 2025-12-01 22:33:30.755 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:33:30 compute-0 nova_compute[189508]: 2025-12-01 22:33:30.756 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid db72b066-1974-41bb-a917-13b5ba129196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:33:31 compute-0 openstack_network_exporter[205887]: ERROR   22:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:33:31 compute-0 openstack_network_exporter[205887]: ERROR   22:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:33:31 compute-0 openstack_network_exporter[205887]: ERROR   22:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:33:31 compute-0 openstack_network_exporter[205887]: ERROR   22:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:33:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:33:31 compute-0 openstack_network_exporter[205887]: ERROR   22:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:33:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:33:31 compute-0 nova_compute[189508]: 2025-12-01 22:33:31.839 189512 DEBUG nova.network.neutron [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Successfully updated port: 112b3e51-47c2-499f-9108-af9d45576c1e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 22:33:31 compute-0 nova_compute[189508]: 2025-12-01 22:33:31.857 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:33:31 compute-0 nova_compute[189508]: 2025-12-01 22:33:31.859 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquired lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:33:31 compute-0 nova_compute[189508]: 2025-12-01 22:33:31.860 189512 DEBUG nova.network.neutron [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 22:33:31 compute-0 nova_compute[189508]: 2025-12-01 22:33:31.982 189512 DEBUG nova.compute.manager [req-aefa6758-1744-4c0d-8095-9c4b8fdd722d req-b9bef73c-af32-4895-9b14-d174a322eb8b c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Received event network-changed-112b3e51-47c2-499f-9108-af9d45576c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:33:31 compute-0 nova_compute[189508]: 2025-12-01 22:33:31.983 189512 DEBUG nova.compute.manager [req-aefa6758-1744-4c0d-8095-9c4b8fdd722d req-b9bef73c-af32-4895-9b14-d174a322eb8b c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Refreshing instance network info cache due to event network-changed-112b3e51-47c2-499f-9108-af9d45576c1e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:33:31 compute-0 nova_compute[189508]: 2025-12-01 22:33:31.984 189512 DEBUG oslo_concurrency.lockutils [req-aefa6758-1744-4c0d-8095-9c4b8fdd722d req-b9bef73c-af32-4895-9b14-d174a322eb8b c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:33:32 compute-0 nova_compute[189508]: 2025-12-01 22:33:32.768 189512 DEBUG nova.network.neutron [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 22:33:34 compute-0 nova_compute[189508]: 2025-12-01 22:33:34.516 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.264 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.265 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.273 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance db72b066-1974-41bb-a917-13b5ba129196 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1d850d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.515 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:35.665 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/db72b066-1974-41bb-a917-13b5ba129196 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82f68aee2d35afc7725a847ea4300457258faf9d3b47fbdf3a1dc69f53294b24" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.685 189512 DEBUG nova.network.neutron [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Updating instance_info_cache with network_info: [{"id": "112b3e51-47c2-499f-9108-af9d45576c1e", "address": "fa:16:3e:96:04:8b", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap112b3e51-47", "ovs_interfaceid": "112b3e51-47c2-499f-9108-af9d45576c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.700 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Releasing lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.701 189512 DEBUG nova.compute.manager [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Instance network_info: |[{"id": "112b3e51-47c2-499f-9108-af9d45576c1e", "address": "fa:16:3e:96:04:8b", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap112b3e51-47", "ovs_interfaceid": "112b3e51-47c2-499f-9108-af9d45576c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.702 189512 DEBUG oslo_concurrency.lockutils [req-aefa6758-1744-4c0d-8095-9c4b8fdd722d req-b9bef73c-af32-4895-9b14-d174a322eb8b c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.702 189512 DEBUG nova.network.neutron [req-aefa6758-1744-4c0d-8095-9c4b8fdd722d req-b9bef73c-af32-4895-9b14-d174a322eb8b c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Refreshing network info cache for port 112b3e51-47c2-499f-9108-af9d45576c1e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.706 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Start _get_guest_xml network_info=[{"id": "112b3e51-47c2-499f-9108-af9d45576c1e", "address": "fa:16:3e:96:04:8b", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap112b3e51-47", "ovs_interfaceid": "112b3e51-47c2-499f-9108-af9d45576c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T22:30:45Z,direct_url=<?>,disk_format='qcow2',id=ca09b2c0-a624-4fb0-b624-b8d92d761f4a,min_disk=0,min_ram=0,name='cirros',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T22:30:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}], 'ephemerals': [{'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'size': 1, 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.716 189512 WARNING nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.728 189512 DEBUG nova.virt.libvirt.host [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.729 189512 DEBUG nova.virt.libvirt.host [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.734 189512 DEBUG nova.virt.libvirt.host [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.735 189512 DEBUG nova.virt.libvirt.host [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.736 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.736 189512 DEBUG nova.virt.hardware [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T22:30:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='aa9783c0-34c0-4a4d-bc86-59429edc9395',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T22:30:45Z,direct_url=<?>,disk_format='qcow2',id=ca09b2c0-a624-4fb0-b624-b8d92d761f4a,min_disk=0,min_ram=0,name='cirros',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T22:30:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.737 189512 DEBUG nova.virt.hardware [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.737 189512 DEBUG nova.virt.hardware [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.737 189512 DEBUG nova.virt.hardware [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.738 189512 DEBUG nova.virt.hardware [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.738 189512 DEBUG nova.virt.hardware [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.738 189512 DEBUG nova.virt.hardware [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.739 189512 DEBUG nova.virt.hardware [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.739 189512 DEBUG nova.virt.hardware [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.739 189512 DEBUG nova.virt.hardware [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.739 189512 DEBUG nova.virt.hardware [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.745 189512 DEBUG nova.virt.libvirt.vif [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:33:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv',id=2,image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='40d7879f-33f5-4fcb-8784-d9088730e18f'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af2fbf0e1b5f40c19aed69d241db7727',ramdisk_id='',reservation_id='r-gbn10oql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:33:26Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04Nzc2MjEyNzIxNTY1NzAwNDgwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg3NzYyMTI3MjE1NjU3MDA0ODA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODc3NjIxMjcyMTU2NTcwMDQ4MD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg3NzYyMTI3MjE1NjU3MDA0ODA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04Nzc2MjEyNzIxNTY1NzAwNDgwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04Nzc2MjEyNzIxNTY1NzAwNDgwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  1 22:33:35 compute-0 nova_compute[189508]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODc3NjIxMjcyMTU2NTcwMDQ4MD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg3NzYyMTI3MjE1NjU3MDA0ODA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04Nzc2MjEyNzIxNTY1NzAwNDgwPT0tLQo=',user_id='3b810e864d6c4d058e539f62ad181096',uuid=ef18b98f-df89-44d0-9215-5c2e556e10be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "112b3e51-47c2-499f-9108-af9d45576c1e", "address": "fa:16:3e:96:04:8b", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap112b3e51-47", "ovs_interfaceid": "112b3e51-47c2-499f-9108-af9d45576c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.745 189512 DEBUG nova.network.os_vif_util [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converting VIF {"id": "112b3e51-47c2-499f-9108-af9d45576c1e", "address": "fa:16:3e:96:04:8b", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap112b3e51-47", "ovs_interfaceid": "112b3e51-47c2-499f-9108-af9d45576c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.746 189512 DEBUG nova.network.os_vif_util [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:04:8b,bridge_name='br-int',has_traffic_filtering=True,id=112b3e51-47c2-499f-9108-af9d45576c1e,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap112b3e51-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.748 189512 DEBUG nova.objects.instance [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef18b98f-df89-44d0-9215-5c2e556e10be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.767 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] End _get_guest_xml xml=<domain type="kvm">
Dec  1 22:33:35 compute-0 nova_compute[189508]:  <uuid>ef18b98f-df89-44d0-9215-5c2e556e10be</uuid>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  <name>instance-00000002</name>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  <memory>524288</memory>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  <vcpu>1</vcpu>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  <metadata>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <nova:name>vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv</nova:name>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <nova:creationTime>2025-12-01 22:33:35</nova:creationTime>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <nova:flavor name="m1.small">
Dec  1 22:33:35 compute-0 nova_compute[189508]:        <nova:memory>512</nova:memory>
Dec  1 22:33:35 compute-0 nova_compute[189508]:        <nova:disk>1</nova:disk>
Dec  1 22:33:35 compute-0 nova_compute[189508]:        <nova:swap>0</nova:swap>
Dec  1 22:33:35 compute-0 nova_compute[189508]:        <nova:ephemeral>1</nova:ephemeral>
Dec  1 22:33:35 compute-0 nova_compute[189508]:        <nova:vcpus>1</nova:vcpus>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      </nova:flavor>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <nova:owner>
Dec  1 22:33:35 compute-0 nova_compute[189508]:        <nova:user uuid="3b810e864d6c4d058e539f62ad181096">admin</nova:user>
Dec  1 22:33:35 compute-0 nova_compute[189508]:        <nova:project uuid="af2fbf0e1b5f40c19aed69d241db7727">admin</nova:project>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      </nova:owner>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <nova:root type="image" uuid="ca09b2c0-a624-4fb0-b624-b8d92d761f4a"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <nova:ports>
Dec  1 22:33:35 compute-0 nova_compute[189508]:        <nova:port uuid="112b3e51-47c2-499f-9108-af9d45576c1e">
Dec  1 22:33:35 compute-0 nova_compute[189508]:          <nova:ip type="fixed" address="192.168.0.23" ipVersion="4"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:        </nova:port>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      </nova:ports>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    </nova:instance>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  </metadata>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  <sysinfo type="smbios">
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <system>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <entry name="manufacturer">RDO</entry>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <entry name="product">OpenStack Compute</entry>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <entry name="serial">ef18b98f-df89-44d0-9215-5c2e556e10be</entry>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <entry name="uuid">ef18b98f-df89-44d0-9215-5c2e556e10be</entry>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <entry name="family">Virtual Machine</entry>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    </system>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  </sysinfo>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  <os>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <boot dev="hd"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <smbios mode="sysinfo"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  </os>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  <features>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <acpi/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <apic/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <vmcoreinfo/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  </features>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  <clock offset="utc">
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <timer name="hpet" present="no"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  </clock>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  <cpu mode="host-model" match="exact">
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  </cpu>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  <devices>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <target dev="vda" bus="virtio"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <target dev="vdb" bus="virtio"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <disk type="file" device="cdrom">
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.config"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <target dev="sda" bus="sata"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <interface type="ethernet">
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <mac address="fa:16:3e:96:04:8b"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <mtu size="1442"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <target dev="tap112b3e51-47"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    </interface>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <serial type="pty">
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <log file="/var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/console.log" append="off"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    </serial>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <video>
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    </video>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <input type="tablet" bus="usb"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <rng model="virtio">
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <backend model="random">/dev/urandom</backend>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    </rng>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <controller type="usb" index="0"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    <memballoon model="virtio">
Dec  1 22:33:35 compute-0 nova_compute[189508]:      <stats period="10"/>
Dec  1 22:33:35 compute-0 nova_compute[189508]:    </memballoon>
Dec  1 22:33:35 compute-0 nova_compute[189508]:  </devices>
Dec  1 22:33:35 compute-0 nova_compute[189508]: </domain>
Dec  1 22:33:35 compute-0 nova_compute[189508]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.769 189512 DEBUG nova.compute.manager [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Preparing to wait for external event network-vif-plugged-112b3e51-47c2-499f-9108-af9d45576c1e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.769 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.769 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.769 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.770 189512 DEBUG nova.virt.libvirt.vif [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:33:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv',id=2,image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='40d7879f-33f5-4fcb-8784-d9088730e18f'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af2fbf0e1b5f40c19aed69d241db7727',ramdisk_id='',reservation_id='r-gbn10oql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:33:26Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04Nzc2MjEyNzIxNTY1NzAwNDgwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg3NzYyMTI3MjE1NjU3MDA0ODA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODc3NjIxMjcyMTU2NTcwMDQ4MD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg3NzYyMTI3MjE1NjU3MDA0ODA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04Nzc2MjEyNzIxNTY1NzAwNDgwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04Nzc2MjEyNzIxNTY1NzAwNDgwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  1 22:33:35 compute-0 nova_compute[189508]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODc3NjIxMjcyMTU2NTcwMDQ4MD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg3NzYyMTI3MjE1NjU3MDA0ODA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04Nzc2MjEyNzIxNTY1NzAwNDgwPT0tLQo=',user_id='3b810e864d6c4d058e539f62ad181096',uuid=ef18b98f-df89-44d0-9215-5c2e556e10be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "112b3e51-47c2-499f-9108-af9d45576c1e", "address": "fa:16:3e:96:04:8b", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap112b3e51-47", "ovs_interfaceid": "112b3e51-47c2-499f-9108-af9d45576c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.770 189512 DEBUG nova.network.os_vif_util [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converting VIF {"id": "112b3e51-47c2-499f-9108-af9d45576c1e", "address": "fa:16:3e:96:04:8b", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap112b3e51-47", "ovs_interfaceid": "112b3e51-47c2-499f-9108-af9d45576c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.771 189512 DEBUG nova.network.os_vif_util [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:04:8b,bridge_name='br-int',has_traffic_filtering=True,id=112b3e51-47c2-499f-9108-af9d45576c1e,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap112b3e51-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.771 189512 DEBUG os_vif [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:04:8b,bridge_name='br-int',has_traffic_filtering=True,id=112b3e51-47c2-499f-9108-af9d45576c1e,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap112b3e51-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.772 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.772 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.773 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.777 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.777 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap112b3e51-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.778 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap112b3e51-47, col_values=(('external_ids', {'iface-id': '112b3e51-47c2-499f-9108-af9d45576c1e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:04:8b', 'vm-uuid': 'ef18b98f-df89-44d0-9215-5c2e556e10be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.779 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.781 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:33:35 compute-0 NetworkManager[56278]: <info>  [1764628415.7845] manager: (tap112b3e51-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.789 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.790 189512 INFO os_vif [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:04:8b,bridge_name='br-int',has_traffic_filtering=True,id=112b3e51-47c2-499f-9108-af9d45576c1e,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap112b3e51-47')#033[00m
Dec  1 22:33:35 compute-0 rsyslogd[236992]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 22:33:35.745 189512 DEBUG nova.virt.libvirt.vif [None req-52de3a40-d5 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 22:33:35 compute-0 rsyslogd[236992]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 22:33:35.770 189512 DEBUG nova.virt.libvirt.vif [None req-52de3a40-d5 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 22:33:35 compute-0 nova_compute[189508]: 2025-12-01 22:33:35.877 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updating instance_info_cache with network_info: [{"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.026 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.027 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.027 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.028 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.028 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.028 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.029 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.029 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.046 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.047 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.047 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.047 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No VIF found with MAC fa:16:3e:96:04:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.048 189512 INFO nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Using config drive#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.059 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.059 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.060 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.060 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.182 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.278 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.280 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.392 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.394 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.470 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.473 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.542 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.555 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.559 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1850 Content-Type: application/json Date: Mon, 01 Dec 2025 22:33:35 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-c63c666d-8841-474b-9f37-33850a4d9307 x-openstack-request-id: req-c63c666d-8841-474b-9f37-33850a4d9307 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.559 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "db72b066-1974-41bb-a917-13b5ba129196", "name": "test_0", "status": "ACTIVE", "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "user_id": "3b810e864d6c4d058e539f62ad181096", "metadata": {}, "hostId": "968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d", "image": {"id": "ca09b2c0-a624-4fb0-b624-b8d92d761f4a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ca09b2c0-a624-4fb0-b624-b8d92d761f4a"}]}, "flavor": {"id": "aa9783c0-34c0-4a4d-bc86-59429edc9395", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/aa9783c0-34c0-4a4d-bc86-59429edc9395"}]}, "created": "2025-12-01T22:31:59Z", "updated": "2025-12-01T22:32:15Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.177", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:78:3f:bd"}, {"version": 4, "addr": "192.168.122.212", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:78:3f:bd"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/db72b066-1974-41bb-a917-13b5ba129196"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/db72b066-1974-41bb-a917-13b5ba129196"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T22:32:15.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.559 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/db72b066-1974-41bb-a917-13b5ba129196 used request id req-c63c666d-8841-474b-9f37-33850a4d9307 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.561 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'db72b066-1974-41bb-a917-13b5ba129196', 'name': 'test_0', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.561 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.562 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.564 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T22:33:36.561873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.570 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for db72b066-1974-41bb-a917-13b5ba129196 / tap64f1c8ea-4a inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.571 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.573 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.573 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.574 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.574 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.576 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.576 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.576 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.576 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T22:33:36.574172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.578 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.578 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T22:33:36.576620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T22:33:36.578784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.620 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.620 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.620 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.621 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.622 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.621 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.622 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.623 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T22:33:36.622902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.703 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.704 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.721 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.722 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.722 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.722 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.723 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.723 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.723 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 484161753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.723 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 126486600 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.724 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 84264950 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.724 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.724 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.725 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.725 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T22:33:36.723617) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.725 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.725 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.725 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.725 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.726 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.726 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.726 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.726 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.726 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.726 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T22:33:36.725215) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.727 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.727 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.727 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.727 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.727 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.727 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.728 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.728 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.728 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.728 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.728 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.729 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.729 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.729 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.729 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.729 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.729 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.730 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.730 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.730 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 2925316221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.730 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 17009348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.730 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.731 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.731 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.732 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T22:33:36.726502) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.732 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T22:33:36.727870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.732 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T22:33:36.729059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.732 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T22:33:36.730380) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.732 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T22:33:36.731593) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.763 189512 INFO nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Creating config drive at /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.config
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.772 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp14elx5is execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.774 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/cpu volume: 32050000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.775 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.775 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.775 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.775 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.775 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.775 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.776 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.776 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.776 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.776 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.776 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.776 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.777 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.777 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.777 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.777 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.777 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.777 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.777 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.778 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.778 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.778 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.778 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.778 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.778 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.778 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.779 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.779 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.779 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.780 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.780 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.780 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.780 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.780 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.780 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.780 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.780 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.781 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.781 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.781 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.781 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.781 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.782 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.782 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.782 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T22:33:36.775710) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.783 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.783 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T22:33:36.776560) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.783 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.783 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T22:33:36.777538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.783 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.783 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T22:33:36.778857) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.783 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.783 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T22:33:36.780094) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.783 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T22:33:36.780862) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.783 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T22:33:36.781732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.783 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.783 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.783 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.784 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.784 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.784 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.784 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.784 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.784 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.784 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.784 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.784 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes volume: 2062 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.785 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.785 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T22:33:36.783160) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.785 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.785 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.785 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.785 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.785 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.786 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.786 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.786 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.786 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.786 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.786 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/memory.usage volume: 48.90625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.787 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.787 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.787 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.787 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.787 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.787 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.787 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.788 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.788 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.788 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T22:33:36.784081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.788 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.788 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.788 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes volume: 1884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.788 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.789 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.789 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.789 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.789 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.789 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.789 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.789 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.789 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.791 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T22:33:36.784877) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.792 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T22:33:36.785846) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.792 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T22:33:36.786713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.792 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T22:33:36.787508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:33:36.792 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T22:33:36.788448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.826 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.827 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.892 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:33:36 compute-0 nova_compute[189508]: 2025-12-01 22:33:36.919 189512 DEBUG oslo_concurrency.processutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp14elx5is" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:33:37 compute-0 kernel: tap112b3e51-47: entered promiscuous mode
Dec  1 22:33:37 compute-0 NetworkManager[56278]: <info>  [1764628417.0440] manager: (tap112b3e51-47): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Dec  1 22:33:37 compute-0 ovn_controller[97770]: 2025-12-01T22:33:37Z|00035|binding|INFO|Claiming lport 112b3e51-47c2-499f-9108-af9d45576c1e for this chassis.
Dec  1 22:33:37 compute-0 ovn_controller[97770]: 2025-12-01T22:33:37Z|00036|binding|INFO|112b3e51-47c2-499f-9108-af9d45576c1e: Claiming fa:16:3e:96:04:8b 192.168.0.23
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.055 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:37 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:37.058 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:04:8b 192.168.0.23'], port_security=['fa:16:3e:96:04:8b 192.168.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-37pfkxggku2d-mb7dw7aouq46-553w42hrmnbi-port-am2gni7fe4iu', 'neutron:cidrs': '192.168.0.23/24', 'neutron:device_id': 'ef18b98f-df89-44d0-9215-5c2e556e10be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-37pfkxggku2d-mb7dw7aouq46-553w42hrmnbi-port-am2gni7fe4iu', 'neutron:project_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a56d0f98-60b7-42d6-a9fa-4c77301b81c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8157a1f-e2f4-4050-ab6e-a95d2880ddbb, chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=112b3e51-47c2-499f-9108-af9d45576c1e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:33:37 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:37.061 106662 INFO neutron.agent.ovn.metadata.agent [-] Port 112b3e51-47c2-499f-9108-af9d45576c1e in datapath dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c bound to our chassis#033[00m
Dec  1 22:33:37 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:37.064 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c#033[00m
Dec  1 22:33:37 compute-0 ovn_controller[97770]: 2025-12-01T22:33:37Z|00037|binding|INFO|Setting lport 112b3e51-47c2-499f-9108-af9d45576c1e ovn-installed in OVS
Dec  1 22:33:37 compute-0 ovn_controller[97770]: 2025-12-01T22:33:37Z|00038|binding|INFO|Setting lport 112b3e51-47c2-499f-9108-af9d45576c1e up in Southbound
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.078 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.082 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:37 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:37.095 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b527e6-172f-4715-96ac-73e441e2a508]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:33:37 compute-0 systemd-machined[155759]: New machine qemu-2-instance-00000002.
Dec  1 22:33:37 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:37.123 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[7e63b147-90b1-41c6-931f-25c7b91962bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:33:37 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec  1 22:33:37 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:37.127 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[df144a42-5a8b-4a88-a99c-8f8e87180c5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:33:37 compute-0 systemd-udevd[240731]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:33:37 compute-0 NetworkManager[56278]: <info>  [1764628417.1574] device (tap112b3e51-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 22:33:37 compute-0 NetworkManager[56278]: <info>  [1764628417.1640] device (tap112b3e51-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 22:33:37 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:37.163 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[04c2d9e5-1227-4651-9c8c-8808997a9f9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:33:37 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:37.191 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[e83f66e0-96a0-4352-b60a-a708e9d379cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd6e3c27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:b1:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384760, 'reachable_time': 21435, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240735, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:33:37 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:37.212 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[8c1948df-74f2-4c4b-899d-d40e23e14b2a]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapdd6e3c27-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384779, 'tstamp': 384779}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240741, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdd6e3c27-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384784, 'tstamp': 384784}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240741, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:33:37 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:37.213 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd6e3c27-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.215 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.217 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:37 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:37.217 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd6e3c27-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:33:37 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:37.217 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:33:37 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:37.218 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdd6e3c27-10, col_values=(('external_ids', {'iface-id': 'e303b09b-4673-4950-aa2d-91085a5bc5f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:33:37 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:33:37.218 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.472 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.473 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5225MB free_disk=72.2026252746582GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.473 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.474 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.587 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.587 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance ef18b98f-df89-44d0-9215-5c2e556e10be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.587 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.588 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.684 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.700 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.722 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.723 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.754 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764628417.753263, ef18b98f-df89-44d0-9215-5c2e556e10be => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.755 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] VM Started (Lifecycle Event)#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.776 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.786 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764628417.7535179, ef18b98f-df89-44d0-9215-5c2e556e10be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.787 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] VM Paused (Lifecycle Event)#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.831 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.840 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.868 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.962 189512 DEBUG nova.compute.manager [req-0242c433-acf9-4549-9092-ca8a046ac243 req-4bc0a706-7963-41cf-adfb-f0592d562c78 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Received event network-vif-plugged-112b3e51-47c2-499f-9108-af9d45576c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.963 189512 DEBUG oslo_concurrency.lockutils [req-0242c433-acf9-4549-9092-ca8a046ac243 req-4bc0a706-7963-41cf-adfb-f0592d562c78 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.963 189512 DEBUG oslo_concurrency.lockutils [req-0242c433-acf9-4549-9092-ca8a046ac243 req-4bc0a706-7963-41cf-adfb-f0592d562c78 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.964 189512 DEBUG oslo_concurrency.lockutils [req-0242c433-acf9-4549-9092-ca8a046ac243 req-4bc0a706-7963-41cf-adfb-f0592d562c78 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.964 189512 DEBUG nova.compute.manager [req-0242c433-acf9-4549-9092-ca8a046ac243 req-4bc0a706-7963-41cf-adfb-f0592d562c78 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Processing event network-vif-plugged-112b3e51-47c2-499f-9108-af9d45576c1e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.965 189512 DEBUG nova.compute.manager [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.971 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764628417.9710433, ef18b98f-df89-44d0-9215-5c2e556e10be => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.973 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] VM Resumed (Lifecycle Event)#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.977 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.984 189512 INFO nova.virt.libvirt.driver [-] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Instance spawned successfully.#033[00m
Dec  1 22:33:37 compute-0 nova_compute[189508]: 2025-12-01 22:33:37.984 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 22:33:38 compute-0 nova_compute[189508]: 2025-12-01 22:33:38.009 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:33:38 compute-0 nova_compute[189508]: 2025-12-01 22:33:38.035 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:33:38 compute-0 nova_compute[189508]: 2025-12-01 22:33:38.201 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:33:38 compute-0 nova_compute[189508]: 2025-12-01 22:33:38.201 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:33:38 compute-0 nova_compute[189508]: 2025-12-01 22:33:38.202 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:33:38 compute-0 nova_compute[189508]: 2025-12-01 22:33:38.202 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:33:38 compute-0 nova_compute[189508]: 2025-12-01 22:33:38.203 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:33:38 compute-0 nova_compute[189508]: 2025-12-01 22:33:38.203 189512 DEBUG nova.virt.libvirt.driver [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:33:38 compute-0 nova_compute[189508]: 2025-12-01 22:33:38.207 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:33:38 compute-0 nova_compute[189508]: 2025-12-01 22:33:38.216 189512 DEBUG nova.network.neutron [req-aefa6758-1744-4c0d-8095-9c4b8fdd722d req-b9bef73c-af32-4895-9b14-d174a322eb8b c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Updated VIF entry in instance network info cache for port 112b3e51-47c2-499f-9108-af9d45576c1e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:33:38 compute-0 nova_compute[189508]: 2025-12-01 22:33:38.217 189512 DEBUG nova.network.neutron [req-aefa6758-1744-4c0d-8095-9c4b8fdd722d req-b9bef73c-af32-4895-9b14-d174a322eb8b c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Updating instance_info_cache with network_info: [{"id": "112b3e51-47c2-499f-9108-af9d45576c1e", "address": "fa:16:3e:96:04:8b", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap112b3e51-47", "ovs_interfaceid": "112b3e51-47c2-499f-9108-af9d45576c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:33:38 compute-0 nova_compute[189508]: 2025-12-01 22:33:38.290 189512 DEBUG oslo_concurrency.lockutils [req-aefa6758-1744-4c0d-8095-9c4b8fdd722d req-b9bef73c-af32-4895-9b14-d174a322eb8b c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:33:38 compute-0 nova_compute[189508]: 2025-12-01 22:33:38.299 189512 INFO nova.compute.manager [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Took 11.68 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 22:33:38 compute-0 nova_compute[189508]: 2025-12-01 22:33:38.299 189512 DEBUG nova.compute.manager [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:33:38 compute-0 nova_compute[189508]: 2025-12-01 22:33:38.370 189512 INFO nova.compute.manager [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Took 12.35 seconds to build instance.#033[00m
Dec  1 22:33:38 compute-0 nova_compute[189508]: 2025-12-01 22:33:38.385 189512 DEBUG oslo_concurrency.lockutils [None req-52de3a40-d531-4aa7-ba84-7002f7835a03 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:33:39 compute-0 nova_compute[189508]: 2025-12-01 22:33:39.519 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:40 compute-0 nova_compute[189508]: 2025-12-01 22:33:40.076 189512 DEBUG nova.compute.manager [req-2158242c-3ed7-4381-888f-d76f6add0ab5 req-7ea20594-c4d9-44f7-96cb-3aff0f1587be c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Received event network-vif-plugged-112b3e51-47c2-499f-9108-af9d45576c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:33:40 compute-0 nova_compute[189508]: 2025-12-01 22:33:40.076 189512 DEBUG oslo_concurrency.lockutils [req-2158242c-3ed7-4381-888f-d76f6add0ab5 req-7ea20594-c4d9-44f7-96cb-3aff0f1587be c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:33:40 compute-0 nova_compute[189508]: 2025-12-01 22:33:40.077 189512 DEBUG oslo_concurrency.lockutils [req-2158242c-3ed7-4381-888f-d76f6add0ab5 req-7ea20594-c4d9-44f7-96cb-3aff0f1587be c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:33:40 compute-0 nova_compute[189508]: 2025-12-01 22:33:40.077 189512 DEBUG oslo_concurrency.lockutils [req-2158242c-3ed7-4381-888f-d76f6add0ab5 req-7ea20594-c4d9-44f7-96cb-3aff0f1587be c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:33:40 compute-0 nova_compute[189508]: 2025-12-01 22:33:40.077 189512 DEBUG nova.compute.manager [req-2158242c-3ed7-4381-888f-d76f6add0ab5 req-7ea20594-c4d9-44f7-96cb-3aff0f1587be c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] No waiting events found dispatching network-vif-plugged-112b3e51-47c2-499f-9108-af9d45576c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:33:40 compute-0 nova_compute[189508]: 2025-12-01 22:33:40.077 189512 WARNING nova.compute.manager [req-2158242c-3ed7-4381-888f-d76f6add0ab5 req-7ea20594-c4d9-44f7-96cb-3aff0f1587be c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Received unexpected event network-vif-plugged-112b3e51-47c2-499f-9108-af9d45576c1e for instance with vm_state active and task_state None.#033[00m
Dec  1 22:33:40 compute-0 nova_compute[189508]: 2025-12-01 22:33:40.782 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:43 compute-0 podman[240750]: 2025-12-01 22:33:43.868228033 +0000 UTC m=+0.137538190 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 22:33:44 compute-0 nova_compute[189508]: 2025-12-01 22:33:44.522 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:45 compute-0 nova_compute[189508]: 2025-12-01 22:33:45.787 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:46 compute-0 podman[240771]: 2025-12-01 22:33:46.875702471 +0000 UTC m=+0.141571311 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:33:48 compute-0 podman[240791]: 2025-12-01 22:33:48.858108282 +0000 UTC m=+0.123679820 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 22:33:49 compute-0 nova_compute[189508]: 2025-12-01 22:33:49.525 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:50 compute-0 nova_compute[189508]: 2025-12-01 22:33:50.791 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:53 compute-0 podman[240812]: 2025-12-01 22:33:53.830002717 +0000 UTC m=+0.103141485 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 22:33:53 compute-0 podman[240811]: 2025-12-01 22:33:53.878248413 +0000 UTC m=+0.149201331 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 22:33:54 compute-0 nova_compute[189508]: 2025-12-01 22:33:54.529 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:55 compute-0 nova_compute[189508]: 2025-12-01 22:33:55.798 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:58 compute-0 podman[240854]: 2025-12-01 22:33:58.86223672 +0000 UTC m=+0.132144412 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public)
Dec  1 22:33:58 compute-0 podman[240853]: 2025-12-01 22:33:58.859223898 +0000 UTC m=+0.119129545 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm)
Dec  1 22:33:59 compute-0 nova_compute[189508]: 2025-12-01 22:33:59.532 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:33:59 compute-0 podman[203693]: time="2025-12-01T22:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:33:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:33:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4774 "" "Go-http-client/1.1"
Dec  1 22:34:00 compute-0 nova_compute[189508]: 2025-12-01 22:34:00.804 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:00 compute-0 podman[240891]: 2025-12-01 22:34:00.826260616 +0000 UTC m=+0.099000611 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 22:34:00 compute-0 podman[240892]: 2025-12-01 22:34:00.870256905 +0000 UTC m=+0.125193741 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, version=9.4, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_id=edpm, container_name=kepler, release=1214.1726694543, release-0.7.12=, 
build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 22:34:01 compute-0 openstack_network_exporter[205887]: ERROR   22:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:34:01 compute-0 openstack_network_exporter[205887]: ERROR   22:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:34:01 compute-0 openstack_network_exporter[205887]: ERROR   22:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:34:01 compute-0 openstack_network_exporter[205887]: ERROR   22:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:34:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:34:01 compute-0 openstack_network_exporter[205887]: ERROR   22:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:34:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:34:04 compute-0 nova_compute[189508]: 2025-12-01 22:34:04.536 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:34:04.607 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:34:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:34:04.607 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:34:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:34:04.608 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:34:05 compute-0 nova_compute[189508]: 2025-12-01 22:34:05.808 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:07 compute-0 ovn_controller[97770]: 2025-12-01T22:34:07Z|00039|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Dec  1 22:34:09 compute-0 nova_compute[189508]: 2025-12-01 22:34:09.540 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:10 compute-0 nova_compute[189508]: 2025-12-01 22:34:10.812 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:11 compute-0 ovn_controller[97770]: 2025-12-01T22:34:11Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:96:04:8b 192.168.0.23
Dec  1 22:34:11 compute-0 ovn_controller[97770]: 2025-12-01T22:34:11Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:04:8b 192.168.0.23
Dec  1 22:34:14 compute-0 nova_compute[189508]: 2025-12-01 22:34:14.545 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:14 compute-0 podman[240945]: 2025-12-01 22:34:14.837923651 +0000 UTC m=+0.120543084 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 22:34:15 compute-0 nova_compute[189508]: 2025-12-01 22:34:15.817 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:17 compute-0 podman[240969]: 2025-12-01 22:34:17.86186013 +0000 UTC m=+0.126236160 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 22:34:19 compute-0 nova_compute[189508]: 2025-12-01 22:34:19.548 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:19 compute-0 podman[240989]: 2025-12-01 22:34:19.891980393 +0000 UTC m=+0.166063894 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 22:34:20 compute-0 nova_compute[189508]: 2025-12-01 22:34:20.823 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:24 compute-0 nova_compute[189508]: 2025-12-01 22:34:24.551 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:24 compute-0 podman[241011]: 2025-12-01 22:34:24.854744797 +0000 UTC m=+0.117229113 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:34:24 compute-0 podman[241010]: 2025-12-01 22:34:24.920082922 +0000 UTC m=+0.193127408 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:34:25 compute-0 nova_compute[189508]: 2025-12-01 22:34:25.828 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:29 compute-0 nova_compute[189508]: 2025-12-01 22:34:29.553 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:29 compute-0 podman[203693]: time="2025-12-01T22:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:34:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:34:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4772 "" "Go-http-client/1.1"
Dec  1 22:34:29 compute-0 podman[241056]: 2025-12-01 22:34:29.834512357 +0000 UTC m=+0.098744464 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, name=ubi9-minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 22:34:29 compute-0 podman[241055]: 2025-12-01 22:34:29.855045201 +0000 UTC m=+0.138881817 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:34:30 compute-0 nova_compute[189508]: 2025-12-01 22:34:30.837 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:30 compute-0 nova_compute[189508]: 2025-12-01 22:34:30.893 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:34:30 compute-0 nova_compute[189508]: 2025-12-01 22:34:30.895 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:34:30 compute-0 nova_compute[189508]: 2025-12-01 22:34:30.923 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:34:30 compute-0 nova_compute[189508]: 2025-12-01 22:34:30.923 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:34:30 compute-0 nova_compute[189508]: 2025-12-01 22:34:30.924 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:34:30 compute-0 nova_compute[189508]: 2025-12-01 22:34:30.924 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:34:31 compute-0 nova_compute[189508]: 2025-12-01 22:34:31.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:34:31 compute-0 nova_compute[189508]: 2025-12-01 22:34:31.202 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:34:31 compute-0 nova_compute[189508]: 2025-12-01 22:34:31.203 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:34:31 compute-0 openstack_network_exporter[205887]: ERROR   22:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:34:31 compute-0 openstack_network_exporter[205887]: ERROR   22:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:34:31 compute-0 openstack_network_exporter[205887]: ERROR   22:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:34:31 compute-0 openstack_network_exporter[205887]: ERROR   22:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:34:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:34:31 compute-0 openstack_network_exporter[205887]: ERROR   22:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:34:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:34:31 compute-0 podman[241094]: 2025-12-01 22:34:31.851238672 +0000 UTC m=+0.109504250 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:34:31 compute-0 nova_compute[189508]: 2025-12-01 22:34:31.859 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:34:31 compute-0 nova_compute[189508]: 2025-12-01 22:34:31.860 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:34:31 compute-0 nova_compute[189508]: 2025-12-01 22:34:31.860 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:34:31 compute-0 nova_compute[189508]: 2025-12-01 22:34:31.860 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid db72b066-1974-41bb-a917-13b5ba129196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:34:31 compute-0 podman[241095]: 2025-12-01 22:34:31.883011415 +0000 UTC m=+0.139070263 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, io.buildah.version=1.29.0, io.k8s.description=The 
Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git)
Dec  1 22:34:34 compute-0 nova_compute[189508]: 2025-12-01 22:34:34.557 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.496 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updating instance_info_cache with network_info: [{"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.523 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.524 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.525 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.526 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.527 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.528 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.562 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.563 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.563 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.564 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.692 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.772 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.774 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.836 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.837 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.861 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.958 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:34:35 compute-0 nova_compute[189508]: 2025-12-01 22:34:35.960 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:34:36 compute-0 nova_compute[189508]: 2025-12-01 22:34:36.058 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:34:36 compute-0 nova_compute[189508]: 2025-12-01 22:34:36.070 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:34:36 compute-0 nova_compute[189508]: 2025-12-01 22:34:36.164 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:34:36 compute-0 nova_compute[189508]: 2025-12-01 22:34:36.166 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:34:36 compute-0 nova_compute[189508]: 2025-12-01 22:34:36.236 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:34:36 compute-0 nova_compute[189508]: 2025-12-01 22:34:36.238 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:34:36 compute-0 nova_compute[189508]: 2025-12-01 22:34:36.295 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:34:36 compute-0 nova_compute[189508]: 2025-12-01 22:34:36.297 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:34:36 compute-0 nova_compute[189508]: 2025-12-01 22:34:36.367 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:34:37 compute-0 nova_compute[189508]: 2025-12-01 22:34:37.000 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:34:37 compute-0 nova_compute[189508]: 2025-12-01 22:34:37.002 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5048MB free_disk=72.1805305480957GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:34:37 compute-0 nova_compute[189508]: 2025-12-01 22:34:37.002 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:34:37 compute-0 nova_compute[189508]: 2025-12-01 22:34:37.003 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:34:37 compute-0 nova_compute[189508]: 2025-12-01 22:34:37.107 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:34:37 compute-0 nova_compute[189508]: 2025-12-01 22:34:37.107 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance ef18b98f-df89-44d0-9215-5c2e556e10be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:34:37 compute-0 nova_compute[189508]: 2025-12-01 22:34:37.108 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:34:37 compute-0 nova_compute[189508]: 2025-12-01 22:34:37.108 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:34:37 compute-0 nova_compute[189508]: 2025-12-01 22:34:37.214 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:34:37 compute-0 nova_compute[189508]: 2025-12-01 22:34:37.239 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:34:37 compute-0 nova_compute[189508]: 2025-12-01 22:34:37.274 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:34:37 compute-0 nova_compute[189508]: 2025-12-01 22:34:37.275 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:34:39 compute-0 nova_compute[189508]: 2025-12-01 22:34:39.562 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:40 compute-0 nova_compute[189508]: 2025-12-01 22:34:40.866 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:44 compute-0 nova_compute[189508]: 2025-12-01 22:34:44.566 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:45 compute-0 podman[241161]: 2025-12-01 22:34:45.853577212 +0000 UTC m=+0.112543216 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:34:45 compute-0 nova_compute[189508]: 2025-12-01 22:34:45.871 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:48 compute-0 podman[241185]: 2025-12-01 22:34:48.85842434 +0000 UTC m=+0.126564616 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:34:49 compute-0 nova_compute[189508]: 2025-12-01 22:34:49.569 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:50 compute-0 podman[241207]: 2025-12-01 22:34:50.876724259 +0000 UTC m=+0.149124198 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 22:34:50 compute-0 nova_compute[189508]: 2025-12-01 22:34:50.876 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:54 compute-0 nova_compute[189508]: 2025-12-01 22:34:54.572 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:55 compute-0 nova_compute[189508]: 2025-12-01 22:34:55.880 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:55 compute-0 podman[241231]: 2025-12-01 22:34:55.904632632 +0000 UTC m=+0.155468569 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 22:34:55 compute-0 podman[241230]: 2025-12-01 22:34:55.936189711 +0000 UTC m=+0.197116925 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:34:59 compute-0 nova_compute[189508]: 2025-12-01 22:34:59.575 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:34:59 compute-0 podman[203693]: time="2025-12-01T22:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:34:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:34:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4777 "" "Go-http-client/1.1"
Dec  1 22:35:00 compute-0 nova_compute[189508]: 2025-12-01 22:35:00.883 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:00 compute-0 podman[241275]: 2025-12-01 22:35:00.8850188 +0000 UTC m=+0.150227810 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:35:00 compute-0 podman[241276]: 2025-12-01 22:35:00.905185674 +0000 UTC m=+0.166758550 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9)
Dec  1 22:35:01 compute-0 openstack_network_exporter[205887]: ERROR   22:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:35:01 compute-0 openstack_network_exporter[205887]: ERROR   22:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:35:01 compute-0 openstack_network_exporter[205887]: ERROR   22:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:35:01 compute-0 openstack_network_exporter[205887]: ERROR   22:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:35:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:35:01 compute-0 openstack_network_exporter[205887]: ERROR   22:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:35:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:35:02 compute-0 podman[241314]: 2025-12-01 22:35:02.809808737 +0000 UTC m=+0.090497868 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 22:35:02 compute-0 podman[241315]: 2025-12-01 22:35:02.865491243 +0000 UTC m=+0.126990968 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, name=ubi9, release-0.7.12=)
Dec  1 22:35:04 compute-0 nova_compute[189508]: 2025-12-01 22:35:04.578 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:35:04.608 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:35:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:35:04.609 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:35:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:35:04.610 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:35:05 compute-0 nova_compute[189508]: 2025-12-01 22:35:05.887 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:09 compute-0 nova_compute[189508]: 2025-12-01 22:35:09.586 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:10 compute-0 nova_compute[189508]: 2025-12-01 22:35:10.891 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:14 compute-0 nova_compute[189508]: 2025-12-01 22:35:14.591 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:15 compute-0 nova_compute[189508]: 2025-12-01 22:35:15.897 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:16 compute-0 podman[241357]: 2025-12-01 22:35:16.864477178 +0000 UTC m=+0.129300084 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:35:19 compute-0 nova_compute[189508]: 2025-12-01 22:35:19.595 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:19 compute-0 podman[241383]: 2025-12-01 22:35:19.842966264 +0000 UTC m=+0.108597774 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:35:20 compute-0 nova_compute[189508]: 2025-12-01 22:35:20.904 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:21 compute-0 podman[241403]: 2025-12-01 22:35:21.832870255 +0000 UTC m=+0.112311770 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 22:35:24 compute-0 nova_compute[189508]: 2025-12-01 22:35:24.598 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:25 compute-0 nova_compute[189508]: 2025-12-01 22:35:25.910 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:26 compute-0 podman[241424]: 2025-12-01 22:35:26.879362067 +0000 UTC m=+0.143929420 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  1 22:35:26 compute-0 podman[241423]: 2025-12-01 22:35:26.969604648 +0000 UTC m=+0.232102392 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 22:35:29 compute-0 nova_compute[189508]: 2025-12-01 22:35:29.602 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:29 compute-0 podman[203693]: time="2025-12-01T22:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:35:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:35:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4765 "" "Go-http-client/1.1"
Dec  1 22:35:30 compute-0 nova_compute[189508]: 2025-12-01 22:35:30.915 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:30 compute-0 nova_compute[189508]: 2025-12-01 22:35:30.948 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:35:30 compute-0 nova_compute[189508]: 2025-12-01 22:35:30.950 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:35:30 compute-0 nova_compute[189508]: 2025-12-01 22:35:30.951 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:35:30 compute-0 nova_compute[189508]: 2025-12-01 22:35:30.952 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:35:30 compute-0 nova_compute[189508]: 2025-12-01 22:35:30.953 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:35:31 compute-0 nova_compute[189508]: 2025-12-01 22:35:31.203 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:35:31 compute-0 nova_compute[189508]: 2025-12-01 22:35:31.204 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:35:31 compute-0 openstack_network_exporter[205887]: ERROR   22:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:35:31 compute-0 openstack_network_exporter[205887]: ERROR   22:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:35:31 compute-0 openstack_network_exporter[205887]: ERROR   22:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:35:31 compute-0 openstack_network_exporter[205887]: ERROR   22:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:35:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:35:31 compute-0 openstack_network_exporter[205887]: ERROR   22:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:35:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:35:31 compute-0 podman[241465]: 2025-12-01 22:35:31.815758104 +0000 UTC m=+0.098095584 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:35:31 compute-0 podman[241466]: 2025-12-01 22:35:31.835999711 +0000 UTC m=+0.105248089 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7)
Dec  1 22:35:31 compute-0 nova_compute[189508]: 2025-12-01 22:35:31.900 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:35:31 compute-0 nova_compute[189508]: 2025-12-01 22:35:31.900 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:35:31 compute-0 nova_compute[189508]: 2025-12-01 22:35:31.901 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:35:33 compute-0 podman[241507]: 2025-12-01 22:35:33.063452598 +0000 UTC m=+0.141846021 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:35:33 compute-0 podman[241508]: 2025-12-01 22:35:33.06318957 +0000 UTC m=+0.136948221 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-type=git)
Dec  1 22:35:33 compute-0 nova_compute[189508]: 2025-12-01 22:35:33.750 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Updating instance_info_cache with network_info: [{"id": "112b3e51-47c2-499f-9108-af9d45576c1e", "address": "fa:16:3e:96:04:8b", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap112b3e51-47", "ovs_interfaceid": "112b3e51-47c2-499f-9108-af9d45576c1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:35:33 compute-0 nova_compute[189508]: 2025-12-01 22:35:33.876 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:35:33 compute-0 nova_compute[189508]: 2025-12-01 22:35:33.877 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:35:34 compute-0 nova_compute[189508]: 2025-12-01 22:35:34.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:35:34 compute-0 nova_compute[189508]: 2025-12-01 22:35:34.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:35:34 compute-0 nova_compute[189508]: 2025-12-01 22:35:34.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:35:34 compute-0 nova_compute[189508]: 2025-12-01 22:35:34.604 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.265 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.265 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.265 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c323f440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.277 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'db72b066-1974-41bb-a917-13b5ba129196', 'name': 'test_0', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.281 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance ef18b98f-df89-44d0-9215-5c2e556e10be from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 22:35:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:35.283 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/ef18b98f-df89-44d0-9215-5c2e556e10be -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82f68aee2d35afc7725a847ea4300457258faf9d3b47fbdf3a1dc69f53294b24" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 22:35:35 compute-0 nova_compute[189508]: 2025-12-01 22:35:35.923 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.233 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.234 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.234 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.235 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.284 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Mon, 01 Dec 2025 22:35:35 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-a37d0899-071f-4c4f-9372-4ea0c99c2704 x-openstack-request-id: req-a37d0899-071f-4c4f-9372-4ea0c99c2704 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.285 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "ef18b98f-df89-44d0-9215-5c2e556e10be", "name": "vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv", "status": "ACTIVE", "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "user_id": "3b810e864d6c4d058e539f62ad181096", "metadata": {"metering.server_group": "40d7879f-33f5-4fcb-8784-d9088730e18f"}, "hostId": "968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d", "image": {"id": "ca09b2c0-a624-4fb0-b624-b8d92d761f4a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ca09b2c0-a624-4fb0-b624-b8d92d761f4a"}]}, "flavor": {"id": "aa9783c0-34c0-4a4d-bc86-59429edc9395", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/aa9783c0-34c0-4a4d-bc86-59429edc9395"}]}, "created": "2025-12-01T22:33:23Z", "updated": "2025-12-01T22:33:38Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.23", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:96:04:8b"}, {"version": 4, "addr": "192.168.122.175", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:96:04:8b"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/ef18b98f-df89-44d0-9215-5c2e556e10be"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/ef18b98f-df89-44d0-9215-5c2e556e10be"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T22:33:38.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.285 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/ef18b98f-df89-44d0-9215-5c2e556e10be used request id req-a37d0899-071f-4c4f-9372-4ea0c99c2704 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.286 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ef18b98f-df89-44d0-9215-5c2e556e10be', 'name': 'vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {'metering.server_group': '40d7879f-33f5-4fcb-8784-d9088730e18f'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.287 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.287 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.287 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.287 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.289 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T22:35:36.287661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.293 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.297 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for ef18b98f-df89-44d0-9215-5c2e556e10be / tap112b3e51-47 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.297 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.packets volume: 37 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.298 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.298 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.299 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.299 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.299 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.299 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.300 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T22:35:36.299220) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.300 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.300 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.301 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.301 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.301 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T22:35:36.301116) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.301 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.302 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.302 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T22:35:36.302855) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.332 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.337 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.338 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.338 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.369 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.370 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.370 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.371 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.371 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.372 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.372 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.372 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.372 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T22:35:36.372336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.436 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.439 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.455 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.456 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.457 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.507 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.510 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.556 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.557 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.557 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.558 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.558 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.559 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.559 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 484161753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.559 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 126486600 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.560 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 84264950 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.560 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.latency volume: 493804988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.561 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.latency volume: 100192430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T22:35:36.559216) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.561 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.latency volume: 68791964 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.562 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.563 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.563 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.563 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.564 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.564 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.564 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T22:35:36.563933) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.565 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.566 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.566 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.567 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.568 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.569 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.569 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.569 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.571 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.571 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.572 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.573 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.574 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.575 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.575 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.575 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.576 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T22:35:36.569108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.576 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T22:35:36.575007) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.576 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.577 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.578 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.578 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.578 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.579 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.579 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.579 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.bytes volume: 41811968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T22:35:36.578586) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.580 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.580 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.581 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.582 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.582 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 2925316221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.583 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 17009348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.583 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.583 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.latency volume: 2000404700 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.584 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.latency volume: 11549778 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.584 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.586 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.586 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T22:35:36.582268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.586 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T22:35:36.586601) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.588 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.589 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.618 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/cpu volume: 33950000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.656 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/cpu volume: 68850000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.656 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.657 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.657 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.657 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.657 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.657 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.657 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.657 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.657 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.658 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.658 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.658 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.658 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.658 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T22:35:36.657257) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.659 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.659 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.659 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T22:35:36.658904) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.659 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.660 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.660 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.660 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.660 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.660 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.660 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.660 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.661 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.661 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.661 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.661 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.662 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.662 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.662 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.662 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.662 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.662 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.663 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.663 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T22:35:36.660575) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.663 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T22:35:36.662872) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.663 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv>]
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.664 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.664 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.664 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.664 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.664 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.664 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.665 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.665 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.665 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T22:35:36.664350) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.665 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.665 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.665 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.665 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.665 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T22:35:36.665428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.665 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.666 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.666 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.666 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.666 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.666 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.666 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.667 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.667 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.667 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.667 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.668 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.668 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.668 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T22:35:36.666849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.668 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.668 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T22:35:36.668076) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.668 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.669 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.669 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.669 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.669 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.669 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.669 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.669 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.669 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.670 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.670 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T22:35:36.669599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.670 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.670 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.670 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.670 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.671 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.bytes volume: 4516 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.671 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.671 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.671 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.671 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.671 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.672 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.672 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T22:35:36.670820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.672 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.672 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.672 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.673 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.673 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.673 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.673 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T22:35:36.672113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.673 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T22:35:36.673443) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.673 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/memory.usage volume: 48.90625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.674 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.674 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.674 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.674 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.674 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.674 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.675 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.675 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.675 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T22:35:36.675036) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.675 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv>]
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.675 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.675 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.676 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.676 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.676 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.676 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.676 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.bytes volume: 4807 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.676 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.677 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T22:35:36.676113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.677 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.677 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.677 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.677 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.677 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.677 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.678 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.678 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.678 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.678 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.678 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.678 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.678 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.678 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.678 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.678 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.678 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.679 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.679 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.679 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.679 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.679 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.679 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.679 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.679 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:35:36.679 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.699 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.708 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.812 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.815 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.905 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.906 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.966 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 22:35:36 compute-0 nova_compute[189508]: 2025-12-01 22:35:36.967 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 22:35:37 compute-0 nova_compute[189508]: 2025-12-01 22:35:37.032 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 22:35:37 compute-0 nova_compute[189508]: 2025-12-01 22:35:37.607 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 22:35:37 compute-0 nova_compute[189508]: 2025-12-01 22:35:37.609 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5034MB free_disk=72.18051147460938GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 22:35:37 compute-0 nova_compute[189508]: 2025-12-01 22:35:37.610 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:35:37 compute-0 nova_compute[189508]: 2025-12-01 22:35:37.611 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:35:37 compute-0 nova_compute[189508]: 2025-12-01 22:35:37.711 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:35:37 compute-0 nova_compute[189508]: 2025-12-01 22:35:37.712 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance ef18b98f-df89-44d0-9215-5c2e556e10be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:35:37 compute-0 nova_compute[189508]: 2025-12-01 22:35:37.712 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:35:37 compute-0 nova_compute[189508]: 2025-12-01 22:35:37.713 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:35:37 compute-0 nova_compute[189508]: 2025-12-01 22:35:37.786 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:35:37 compute-0 nova_compute[189508]: 2025-12-01 22:35:37.801 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:35:37 compute-0 nova_compute[189508]: 2025-12-01 22:35:37.804 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:35:37 compute-0 nova_compute[189508]: 2025-12-01 22:35:37.805 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:35:39 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 22:35:39 compute-0 nova_compute[189508]: 2025-12-01 22:35:39.606 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:40 compute-0 nova_compute[189508]: 2025-12-01 22:35:40.928 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:44 compute-0 nova_compute[189508]: 2025-12-01 22:35:44.609 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:45 compute-0 nova_compute[189508]: 2025-12-01 22:35:45.932 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:47 compute-0 podman[241583]: 2025-12-01 22:35:47.893074079 +0000 UTC m=+0.162238242 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 22:35:49 compute-0 nova_compute[189508]: 2025-12-01 22:35:49.612 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:50 compute-0 podman[241608]: 2025-12-01 22:35:50.88296495 +0000 UTC m=+0.147079380 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:35:50 compute-0 nova_compute[189508]: 2025-12-01 22:35:50.938 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:52 compute-0 podman[241628]: 2025-12-01 22:35:52.836867367 +0000 UTC m=+0.101713828 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 22:35:54 compute-0 nova_compute[189508]: 2025-12-01 22:35:54.617 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:55 compute-0 nova_compute[189508]: 2025-12-01 22:35:55.943 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:57 compute-0 podman[241649]: 2025-12-01 22:35:57.902058051 +0000 UTC m=+0.152944187 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  1 22:35:57 compute-0 podman[241648]: 2025-12-01 22:35:57.92414769 +0000 UTC m=+0.183963250 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec  1 22:35:59 compute-0 nova_compute[189508]: 2025-12-01 22:35:59.620 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:35:59 compute-0 podman[203693]: time="2025-12-01T22:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:35:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:35:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4776 "" "Go-http-client/1.1"
Dec  1 22:36:00 compute-0 nova_compute[189508]: 2025-12-01 22:36:00.949 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:01 compute-0 openstack_network_exporter[205887]: ERROR   22:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:36:01 compute-0 openstack_network_exporter[205887]: ERROR   22:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:36:01 compute-0 openstack_network_exporter[205887]: ERROR   22:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:36:01 compute-0 openstack_network_exporter[205887]: ERROR   22:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:36:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:36:01 compute-0 openstack_network_exporter[205887]: ERROR   22:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:36:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:36:02 compute-0 podman[241692]: 2025-12-01 22:36:02.805450978 +0000 UTC m=+0.081685808 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_id=edpm, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, 
io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Dec  1 22:36:02 compute-0 podman[241691]: 2025-12-01 22:36:02.815748221 +0000 UTC m=+0.098288890 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec  1 22:36:03 compute-0 podman[241732]: 2025-12-01 22:36:03.780721853 +0000 UTC m=+0.065582129 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:36:03 compute-0 podman[241733]: 2025-12-01 22:36:03.816157462 +0000 UTC m=+0.097910789 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, name=ubi9, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 22:36:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:36:04.609 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:36:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:36:04.610 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:36:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:36:04.611 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:36:04 compute-0 nova_compute[189508]: 2025-12-01 22:36:04.622 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:05 compute-0 nova_compute[189508]: 2025-12-01 22:36:05.954 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:09 compute-0 nova_compute[189508]: 2025-12-01 22:36:09.626 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:10 compute-0 nova_compute[189508]: 2025-12-01 22:36:10.959 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:14 compute-0 nova_compute[189508]: 2025-12-01 22:36:14.628 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:15 compute-0 nova_compute[189508]: 2025-12-01 22:36:15.965 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:18 compute-0 podman[241770]: 2025-12-01 22:36:18.857080471 +0000 UTC m=+0.122521501 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:36:19 compute-0 nova_compute[189508]: 2025-12-01 22:36:19.632 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:20 compute-0 nova_compute[189508]: 2025-12-01 22:36:20.970 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:21 compute-0 podman[241796]: 2025-12-01 22:36:21.861679752 +0000 UTC m=+0.125708872 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:36:23 compute-0 podman[241816]: 2025-12-01 22:36:23.886675982 +0000 UTC m=+0.148703696 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 22:36:24 compute-0 nova_compute[189508]: 2025-12-01 22:36:24.637 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:25 compute-0 nova_compute[189508]: 2025-12-01 22:36:25.975 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:28 compute-0 nova_compute[189508]: 2025-12-01 22:36:28.801 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:36:28 compute-0 podman[241837]: 2025-12-01 22:36:28.853586058 +0000 UTC m=+0.102699656 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:36:28 compute-0 podman[241836]: 2025-12-01 22:36:28.884332644 +0000 UTC m=+0.140653037 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 22:36:29 compute-0 nova_compute[189508]: 2025-12-01 22:36:29.640 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:29 compute-0 podman[203693]: time="2025-12-01T22:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:36:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:36:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4777 "" "Go-http-client/1.1"
Dec  1 22:36:30 compute-0 nova_compute[189508]: 2025-12-01 22:36:30.215 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:36:30 compute-0 nova_compute[189508]: 2025-12-01 22:36:30.979 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:31 compute-0 nova_compute[189508]: 2025-12-01 22:36:31.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:36:31 compute-0 openstack_network_exporter[205887]: ERROR   22:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:36:31 compute-0 openstack_network_exporter[205887]: ERROR   22:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:36:31 compute-0 openstack_network_exporter[205887]: ERROR   22:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:36:31 compute-0 openstack_network_exporter[205887]: ERROR   22:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:36:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:36:31 compute-0 openstack_network_exporter[205887]: ERROR   22:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:36:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:36:32 compute-0 nova_compute[189508]: 2025-12-01 22:36:32.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:36:32 compute-0 nova_compute[189508]: 2025-12-01 22:36:32.198 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:36:32 compute-0 nova_compute[189508]: 2025-12-01 22:36:32.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:36:32 compute-0 nova_compute[189508]: 2025-12-01 22:36:32.928 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:36:32 compute-0 nova_compute[189508]: 2025-12-01 22:36:32.928 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:36:32 compute-0 nova_compute[189508]: 2025-12-01 22:36:32.928 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:36:32 compute-0 nova_compute[189508]: 2025-12-01 22:36:32.929 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid db72b066-1974-41bb-a917-13b5ba129196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:36:33 compute-0 podman[241881]: 2025-12-01 22:36:33.847568004 +0000 UTC m=+0.127182653 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 22:36:33 compute-0 podman[241882]: 2025-12-01 22:36:33.879494693 +0000 UTC m=+0.146346619 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.33.7)
Dec  1 22:36:33 compute-0 podman[241917]: 2025-12-01 22:36:33.939342748 +0000 UTC m=+0.087528284 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, distribution-scope=public, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, architecture=x86_64, container_name=kepler, vcs-type=git, version=9.4, io.openshift.tags=base rhel9, io.openshift.expose-services=)
Dec  1 22:36:33 compute-0 podman[241914]: 2025-12-01 22:36:33.96223008 +0000 UTC m=+0.112504575 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:36:34 compute-0 nova_compute[189508]: 2025-12-01 22:36:34.095 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updating instance_info_cache with network_info: [{"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:36:34 compute-0 nova_compute[189508]: 2025-12-01 22:36:34.115 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:36:34 compute-0 nova_compute[189508]: 2025-12-01 22:36:34.116 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:36:34 compute-0 nova_compute[189508]: 2025-12-01 22:36:34.119 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:36:34 compute-0 nova_compute[189508]: 2025-12-01 22:36:34.120 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:36:34 compute-0 nova_compute[189508]: 2025-12-01 22:36:34.121 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:36:34 compute-0 nova_compute[189508]: 2025-12-01 22:36:34.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:36:34 compute-0 nova_compute[189508]: 2025-12-01 22:36:34.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:36:34 compute-0 nova_compute[189508]: 2025-12-01 22:36:34.646 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:35 compute-0 nova_compute[189508]: 2025-12-01 22:36:35.983 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.256 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.257 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.258 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.259 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.361 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.477 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.479 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.548 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.550 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.629 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.631 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.727 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.738 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.835 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.837 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.932 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:36:36 compute-0 nova_compute[189508]: 2025-12-01 22:36:36.934 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:36:37 compute-0 nova_compute[189508]: 2025-12-01 22:36:37.036 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:36:37 compute-0 nova_compute[189508]: 2025-12-01 22:36:37.040 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:36:37 compute-0 nova_compute[189508]: 2025-12-01 22:36:37.134 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:36:37 compute-0 nova_compute[189508]: 2025-12-01 22:36:37.556 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:36:37 compute-0 nova_compute[189508]: 2025-12-01 22:36:37.558 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5036MB free_disk=72.1805305480957GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:36:37 compute-0 nova_compute[189508]: 2025-12-01 22:36:37.559 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:36:37 compute-0 nova_compute[189508]: 2025-12-01 22:36:37.560 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:36:37 compute-0 nova_compute[189508]: 2025-12-01 22:36:37.660 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:36:37 compute-0 nova_compute[189508]: 2025-12-01 22:36:37.661 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance ef18b98f-df89-44d0-9215-5c2e556e10be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:36:37 compute-0 nova_compute[189508]: 2025-12-01 22:36:37.662 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:36:37 compute-0 nova_compute[189508]: 2025-12-01 22:36:37.662 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:36:37 compute-0 nova_compute[189508]: 2025-12-01 22:36:37.740 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:36:37 compute-0 nova_compute[189508]: 2025-12-01 22:36:37.758 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:36:37 compute-0 nova_compute[189508]: 2025-12-01 22:36:37.760 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:36:37 compute-0 nova_compute[189508]: 2025-12-01 22:36:37.761 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:36:39 compute-0 nova_compute[189508]: 2025-12-01 22:36:39.648 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:40 compute-0 nova_compute[189508]: 2025-12-01 22:36:40.987 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:44 compute-0 nova_compute[189508]: 2025-12-01 22:36:44.652 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:45 compute-0 nova_compute[189508]: 2025-12-01 22:36:45.992 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:49 compute-0 nova_compute[189508]: 2025-12-01 22:36:49.656 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:49 compute-0 podman[241985]: 2025-12-01 22:36:49.83113997 +0000 UTC m=+0.100586745 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 22:36:50 compute-0 nova_compute[189508]: 2025-12-01 22:36:50.996 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:52 compute-0 podman[242008]: 2025-12-01 22:36:52.855730429 +0000 UTC m=+0.123207890 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec  1 22:36:54 compute-0 nova_compute[189508]: 2025-12-01 22:36:54.663 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:54 compute-0 podman[242031]: 2025-12-01 22:36:54.879869855 +0000 UTC m=+0.142713445 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Dec  1 22:36:56 compute-0 nova_compute[189508]: 2025-12-01 22:36:56.004 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:59 compute-0 nova_compute[189508]: 2025-12-01 22:36:59.665 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:36:59 compute-0 podman[203693]: time="2025-12-01T22:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:36:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:36:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4774 "" "Go-http-client/1.1"
Dec  1 22:36:59 compute-0 podman[242052]: 2025-12-01 22:36:59.881757098 +0000 UTC m=+0.138311273 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec  1 22:36:59 compute-0 podman[242051]: 2025-12-01 22:36:59.985016402 +0000 UTC m=+0.250348957 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Dec  1 22:37:01 compute-0 nova_compute[189508]: 2025-12-01 22:37:01.010 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:01 compute-0 openstack_network_exporter[205887]: ERROR   22:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:37:01 compute-0 openstack_network_exporter[205887]: ERROR   22:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:37:01 compute-0 openstack_network_exporter[205887]: ERROR   22:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:37:01 compute-0 openstack_network_exporter[205887]: ERROR   22:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:37:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:37:01 compute-0 openstack_network_exporter[205887]: ERROR   22:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:37:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:37:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:37:04.610 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:37:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:37:04.611 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:37:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:37:04.613 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:37:04 compute-0 nova_compute[189508]: 2025-12-01 22:37:04.667 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:04 compute-0 podman[242096]: 2025-12-01 22:37:04.851664091 +0000 UTC m=+0.110664896 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:37:04 compute-0 podman[242095]: 2025-12-01 22:37:04.862196241 +0000 UTC m=+0.124738977 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:37:04 compute-0 podman[242097]: 2025-12-01 22:37:04.868252473 +0000 UTC m=+0.130728597 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, architecture=x86_64, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, vendor=Red 
Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 22:37:04 compute-0 podman[242098]: 2025-12-01 22:37:04.887359408 +0000 UTC m=+0.146181548 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9)
Dec  1 22:37:06 compute-0 nova_compute[189508]: 2025-12-01 22:37:06.014 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:09 compute-0 nova_compute[189508]: 2025-12-01 22:37:09.669 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:11 compute-0 nova_compute[189508]: 2025-12-01 22:37:11.017 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:14 compute-0 nova_compute[189508]: 2025-12-01 22:37:14.672 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:16 compute-0 nova_compute[189508]: 2025-12-01 22:37:16.019 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:19 compute-0 nova_compute[189508]: 2025-12-01 22:37:19.675 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:20 compute-0 podman[242175]: 2025-12-01 22:37:20.880067232 +0000 UTC m=+0.139880088 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 22:37:21 compute-0 nova_compute[189508]: 2025-12-01 22:37:21.022 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:23 compute-0 podman[242196]: 2025-12-01 22:37:23.834236538 +0000 UTC m=+0.099693293 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  1 22:37:24 compute-0 nova_compute[189508]: 2025-12-01 22:37:24.679 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:25 compute-0 podman[242214]: 2025-12-01 22:37:25.867929987 +0000 UTC m=+0.133771034 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:37:26 compute-0 nova_compute[189508]: 2025-12-01 22:37:26.026 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:29 compute-0 nova_compute[189508]: 2025-12-01 22:37:29.683 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:29 compute-0 podman[203693]: time="2025-12-01T22:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:37:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:37:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4764 "" "Go-http-client/1.1"
Dec  1 22:37:30 compute-0 podman[242234]: 2025-12-01 22:37:30.857671604 +0000 UTC m=+0.125964882 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:37:30 compute-0 podman[242233]: 2025-12-01 22:37:30.893903426 +0000 UTC m=+0.167889986 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true)
Dec  1 22:37:31 compute-0 nova_compute[189508]: 2025-12-01 22:37:31.031 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:31 compute-0 openstack_network_exporter[205887]: ERROR   22:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:37:31 compute-0 openstack_network_exporter[205887]: ERROR   22:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:37:31 compute-0 openstack_network_exporter[205887]: ERROR   22:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:37:31 compute-0 openstack_network_exporter[205887]: ERROR   22:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:37:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:37:31 compute-0 openstack_network_exporter[205887]: ERROR   22:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:37:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:37:32 compute-0 nova_compute[189508]: 2025-12-01 22:37:32.757 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:37:32 compute-0 nova_compute[189508]: 2025-12-01 22:37:32.758 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:37:32 compute-0 nova_compute[189508]: 2025-12-01 22:37:32.759 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:37:33 compute-0 nova_compute[189508]: 2025-12-01 22:37:33.717 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:37:33 compute-0 nova_compute[189508]: 2025-12-01 22:37:33.718 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:37:33 compute-0 nova_compute[189508]: 2025-12-01 22:37:33.719 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:37:34 compute-0 nova_compute[189508]: 2025-12-01 22:37:34.686 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.265 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.266 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.266 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.274 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'db72b066-1974-41bb-a917-13b5ba129196', 'name': 'test_0', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.278 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ef18b98f-df89-44d0-9215-5c2e556e10be', 'name': 'vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {'metering.server_group': '40d7879f-33f5-4fcb-8784-d9088730e18f'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.279 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.279 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.279 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.280 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T22:37:35.280151) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.285 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.290 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.packets volume: 38 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.291 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.292 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.292 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.292 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.293 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.293 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T22:37:35.292917) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.294 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.294 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.295 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.295 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.296 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.296 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.296 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T22:37:35.296119) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.298 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.299 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.299 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T22:37:35.299144) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.336 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.337 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.337 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.374 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.374 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.375 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.376 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.376 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.376 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.376 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.378 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T22:37:35.376798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.451 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.452 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.452 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.542 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.543 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.543 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.544 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.544 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.545 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.545 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.545 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.545 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 484161753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.546 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 126486600 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.546 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 84264950 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.547 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.latency volume: 493804988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.547 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.latency volume: 100192430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.548 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.latency volume: 68791964 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.548 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.549 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.549 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.549 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.549 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.549 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.549 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.550 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.550 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.550 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.550 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.551 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.551 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.552 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.552 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.552 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.552 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.552 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.552 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.553 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.553 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.553 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.554 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.554 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.554 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.554 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.554 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.555 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.555 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.555 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.555 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.556 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.556 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.557 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.557 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.557 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.557 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.558 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.558 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.558 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.559 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.559 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.560 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.560 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T22:37:35.545535) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.561 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.561 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 2925316221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T22:37:35.549404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T22:37:35.552213) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.561 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 17009348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T22:37:35.554906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.562 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T22:37:35.557758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T22:37:35.561362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.562 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.latency volume: 2011182396 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.562 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.latency volume: 11549778 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.563 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.563 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.564 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.564 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.564 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.564 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T22:37:35.564701) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.611 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/cpu volume: 35770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.637 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/cpu volume: 187370000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.638 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.639 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.639 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.639 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.639 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.640 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T22:37:35.639583) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.641 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.641 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.641 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.641 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.642 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T22:37:35.641742) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.643 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.643 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.644 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.644 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.644 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T22:37:35.644261) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.644 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.645 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.645 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.645 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.requests volume: 238 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.646 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.646 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.647 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.647 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.648 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.648 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.648 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.648 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.649 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T22:37:35.648489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.649 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.650 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.650 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.650 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.650 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.650 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.651 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.651 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.652 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.652 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T22:37:35.650485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.652 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.652 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.652 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.652 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.653 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.654 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.654 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T22:37:35.652829) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.654 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.654 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.654 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.654 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.654 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.655 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.655 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T22:37:35.654674) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.656 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.656 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.656 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.656 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.657 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T22:37:35.656659) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.657 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.657 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.658 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.658 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.658 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.658 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.658 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.658 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.659 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T22:37:35.658484) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.659 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.bytes volume: 4586 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.659 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.659 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.659 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.659 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.660 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.660 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.660 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.660 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.660 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T22:37:35.660092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.661 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.661 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.661 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.661 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.661 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.661 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.661 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/memory.usage volume: 48.90625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.662 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/memory.usage volume: 49.03515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.662 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T22:37:35.661774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.663 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.663 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.663 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.663 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.663 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.663 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.663 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.663 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.664 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.664 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T22:37:35.663928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.664 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.bytes volume: 4807 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.664 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.665 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.666 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.667 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:37:35.668 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:37:35 compute-0 podman[242283]: 2025-12-01 22:37:35.822730318 +0000 UTC m=+0.084834480 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, release-0.7.12=, architecture=x86_64, io.buildah.version=1.29.0, distribution-scope=public, config_id=edpm, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Dec  1 22:37:35 compute-0 podman[242281]: 2025-12-01 22:37:35.839119375 +0000 UTC m=+0.099221920 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  1 22:37:35 compute-0 podman[242280]: 2025-12-01 22:37:35.848183123 +0000 UTC m=+0.111419767 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:37:35 compute-0 podman[242282]: 2025-12-01 22:37:35.870830519 +0000 UTC m=+0.128633868 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-type=git, config_id=edpm, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.023 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Updating instance_info_cache with network_info: [{"id": "112b3e51-47c2-499f-9108-af9d45576c1e", "address": "fa:16:3e:96:04:8b", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap112b3e51-47", "ovs_interfaceid": "112b3e51-47c2-499f-9108-af9d45576c1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.035 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.102 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.103 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.106 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.107 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.108 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.109 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.110 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.111 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.213 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.214 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.249 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.250 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.251 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.251 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.334 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.398 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.400 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.479 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.482 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.547 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.550 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.615 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.627 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.701 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.704 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.773 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.776 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.852 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.854 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:37:36 compute-0 nova_compute[189508]: 2025-12-01 22:37:36.932 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:37:37 compute-0 nova_compute[189508]: 2025-12-01 22:37:37.376 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:37:37 compute-0 nova_compute[189508]: 2025-12-01 22:37:37.379 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5006MB free_disk=72.18050765991211GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:37:37 compute-0 nova_compute[189508]: 2025-12-01 22:37:37.380 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:37:37 compute-0 nova_compute[189508]: 2025-12-01 22:37:37.380 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:37:37 compute-0 nova_compute[189508]: 2025-12-01 22:37:37.850 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:37:37 compute-0 nova_compute[189508]: 2025-12-01 22:37:37.851 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance ef18b98f-df89-44d0-9215-5c2e556e10be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:37:37 compute-0 nova_compute[189508]: 2025-12-01 22:37:37.852 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:37:37 compute-0 nova_compute[189508]: 2025-12-01 22:37:37.853 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:37:37 compute-0 nova_compute[189508]: 2025-12-01 22:37:37.902 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing inventories for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 22:37:37 compute-0 nova_compute[189508]: 2025-12-01 22:37:37.968 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating ProviderTree inventory for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 22:37:37 compute-0 nova_compute[189508]: 2025-12-01 22:37:37.970 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating inventory in ProviderTree for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 22:37:37 compute-0 nova_compute[189508]: 2025-12-01 22:37:37.991 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing aggregate associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 22:37:38 compute-0 nova_compute[189508]: 2025-12-01 22:37:38.012 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing trait associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 22:37:38 compute-0 nova_compute[189508]: 2025-12-01 22:37:38.100 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:37:38 compute-0 nova_compute[189508]: 2025-12-01 22:37:38.122 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:37:38 compute-0 nova_compute[189508]: 2025-12-01 22:37:38.124 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:37:38 compute-0 nova_compute[189508]: 2025-12-01 22:37:38.124 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:37:38 compute-0 nova_compute[189508]: 2025-12-01 22:37:38.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:37:38 compute-0 nova_compute[189508]: 2025-12-01 22:37:38.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:37:38 compute-0 nova_compute[189508]: 2025-12-01 22:37:38.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 22:37:39 compute-0 nova_compute[189508]: 2025-12-01 22:37:39.690 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:41 compute-0 nova_compute[189508]: 2025-12-01 22:37:41.039 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:41 compute-0 nova_compute[189508]: 2025-12-01 22:37:41.218 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:37:41 compute-0 nova_compute[189508]: 2025-12-01 22:37:41.220 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 22:37:41 compute-0 nova_compute[189508]: 2025-12-01 22:37:41.237 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 22:37:44 compute-0 nova_compute[189508]: 2025-12-01 22:37:44.693 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:46 compute-0 nova_compute[189508]: 2025-12-01 22:37:46.043 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:49 compute-0 nova_compute[189508]: 2025-12-01 22:37:49.697 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:51 compute-0 nova_compute[189508]: 2025-12-01 22:37:51.048 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:51 compute-0 podman[242382]: 2025-12-01 22:37:51.855527325 +0000 UTC m=+0.129307357 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:37:54 compute-0 nova_compute[189508]: 2025-12-01 22:37:54.699 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:54 compute-0 podman[242406]: 2025-12-01 22:37:54.82719169 +0000 UTC m=+0.109800591 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125)
Dec  1 22:37:56 compute-0 nova_compute[189508]: 2025-12-01 22:37:56.053 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:56 compute-0 podman[242424]: 2025-12-01 22:37:56.848824644 +0000 UTC m=+0.119805286 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec  1 22:37:59 compute-0 nova_compute[189508]: 2025-12-01 22:37:59.704 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:37:59 compute-0 podman[203693]: time="2025-12-01T22:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:37:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:37:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4779 "" "Go-http-client/1.1"
Dec  1 22:38:01 compute-0 nova_compute[189508]: 2025-12-01 22:38:01.058 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:01 compute-0 openstack_network_exporter[205887]: ERROR   22:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:38:01 compute-0 openstack_network_exporter[205887]: ERROR   22:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:38:01 compute-0 openstack_network_exporter[205887]: ERROR   22:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:38:01 compute-0 openstack_network_exporter[205887]: ERROR   22:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:38:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:38:01 compute-0 openstack_network_exporter[205887]: ERROR   22:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:38:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:38:01 compute-0 podman[242443]: 2025-12-01 22:38:01.874173707 +0000 UTC m=+0.141148725 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:38:01 compute-0 podman[242444]: 2025-12-01 22:38:01.891397248 +0000 UTC m=+0.149530114 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec  1 22:38:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:38:04.612 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:38:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:38:04.612 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:38:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:38:04.613 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:38:04 compute-0 nova_compute[189508]: 2025-12-01 22:38:04.708 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:06 compute-0 nova_compute[189508]: 2025-12-01 22:38:06.063 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:06 compute-0 podman[242485]: 2025-12-01 22:38:06.850056178 +0000 UTC m=+0.112943070 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 22:38:06 compute-0 podman[242493]: 2025-12-01 22:38:06.854784113 +0000 UTC m=+0.096957404 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, distribution-scope=public, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, version=9.4, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, container_name=kepler, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=base rhel9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=)
Dec  1 22:38:06 compute-0 podman[242486]: 2025-12-01 22:38:06.85851943 +0000 UTC m=+0.111784428 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  1 22:38:06 compute-0 podman[242487]: 2025-12-01 22:38:06.865005154 +0000 UTC m=+0.118250001 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, release=1755695350, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, config_id=edpm)
Dec  1 22:38:09 compute-0 nova_compute[189508]: 2025-12-01 22:38:09.712 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:11 compute-0 nova_compute[189508]: 2025-12-01 22:38:11.068 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:14 compute-0 nova_compute[189508]: 2025-12-01 22:38:14.714 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:16 compute-0 nova_compute[189508]: 2025-12-01 22:38:16.073 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:19 compute-0 nova_compute[189508]: 2025-12-01 22:38:19.716 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:21 compute-0 nova_compute[189508]: 2025-12-01 22:38:21.078 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:22 compute-0 podman[242562]: 2025-12-01 22:38:22.83805813 +0000 UTC m=+0.107955868 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:38:24 compute-0 nova_compute[189508]: 2025-12-01 22:38:24.720 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:25 compute-0 podman[242586]: 2025-12-01 22:38:25.901983794 +0000 UTC m=+0.166559449 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Dec  1 22:38:26 compute-0 nova_compute[189508]: 2025-12-01 22:38:26.083 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:27 compute-0 podman[242606]: 2025-12-01 22:38:27.860042975 +0000 UTC m=+0.121887915 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 22:38:28 compute-0 nova_compute[189508]: 2025-12-01 22:38:28.215 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:38:29 compute-0 nova_compute[189508]: 2025-12-01 22:38:29.724 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:29 compute-0 podman[203693]: time="2025-12-01T22:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:38:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:38:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4777 "" "Go-http-client/1.1"
Dec  1 22:38:31 compute-0 nova_compute[189508]: 2025-12-01 22:38:31.087 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:31 compute-0 nova_compute[189508]: 2025-12-01 22:38:31.223 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:38:31 compute-0 openstack_network_exporter[205887]: ERROR   22:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:38:31 compute-0 openstack_network_exporter[205887]: ERROR   22:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:38:31 compute-0 openstack_network_exporter[205887]: ERROR   22:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:38:31 compute-0 openstack_network_exporter[205887]: ERROR   22:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:38:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:38:31 compute-0 openstack_network_exporter[205887]: ERROR   22:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:38:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:38:32 compute-0 nova_compute[189508]: 2025-12-01 22:38:32.290 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:38:32 compute-0 nova_compute[189508]: 2025-12-01 22:38:32.292 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:38:32 compute-0 podman[242628]: 2025-12-01 22:38:32.871986926 +0000 UTC m=+0.130001277 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 22:38:32 compute-0 podman[242627]: 2025-12-01 22:38:32.947704064 +0000 UTC m=+0.215645518 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec  1 22:38:34 compute-0 nova_compute[189508]: 2025-12-01 22:38:34.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:38:34 compute-0 nova_compute[189508]: 2025-12-01 22:38:34.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:38:34 compute-0 nova_compute[189508]: 2025-12-01 22:38:34.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:38:34 compute-0 nova_compute[189508]: 2025-12-01 22:38:34.727 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:36 compute-0 nova_compute[189508]: 2025-12-01 22:38:36.037 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:38:36 compute-0 nova_compute[189508]: 2025-12-01 22:38:36.038 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:38:36 compute-0 nova_compute[189508]: 2025-12-01 22:38:36.039 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:38:36 compute-0 nova_compute[189508]: 2025-12-01 22:38:36.039 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid db72b066-1974-41bb-a917-13b5ba129196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:38:36 compute-0 nova_compute[189508]: 2025-12-01 22:38:36.092 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.277 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updating instance_info_cache with network_info: [{"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.292 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.293 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.294 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.294 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.294 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.295 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.295 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.333 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.335 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.336 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.337 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.469 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.577 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.580 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.686 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.689 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.789 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.791 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:37 compute-0 podman[242678]: 2025-12-01 22:38:37.836273589 +0000 UTC m=+0.102084011 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:38:37 compute-0 podman[242681]: 2025-12-01 22:38:37.859886962 +0000 UTC m=+0.120233468 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, architecture=x86_64, release=1755695350, maintainer=Red Hat, Inc.)
Dec  1 22:38:37 compute-0 podman[242682]: 2025-12-01 22:38:37.872885792 +0000 UTC m=+0.117259533 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vendor=Red Hat, Inc.)
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.877 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:37 compute-0 podman[242680]: 2025-12-01 22:38:37.884094622 +0000 UTC m=+0.140406733 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 
Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.886 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.953 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:37 compute-0 nova_compute[189508]: 2025-12-01 22:38:37.954 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.022 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.023 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.086 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.088 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.161 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.586 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.589 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5034MB free_disk=72.1805305480957GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.589 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.590 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.700 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.700 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance ef18b98f-df89-44d0-9215-5c2e556e10be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.701 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.701 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.762 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.784 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.786 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:38:38 compute-0 nova_compute[189508]: 2025-12-01 22:38:38.787 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:38:39 compute-0 nova_compute[189508]: 2025-12-01 22:38:39.731 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:40 compute-0 nova_compute[189508]: 2025-12-01 22:38:40.692 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:38:41 compute-0 nova_compute[189508]: 2025-12-01 22:38:41.097 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:44 compute-0 nova_compute[189508]: 2025-12-01 22:38:44.734 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:46 compute-0 nova_compute[189508]: 2025-12-01 22:38:46.104 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:38:46.994 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:38:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:38:46.995 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 22:38:46 compute-0 nova_compute[189508]: 2025-12-01 22:38:46.997 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:49 compute-0 nova_compute[189508]: 2025-12-01 22:38:49.740 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:51 compute-0 nova_compute[189508]: 2025-12-01 22:38:51.111 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:53 compute-0 podman[242777]: 2025-12-01 22:38:53.831000711 +0000 UTC m=+0.096637216 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 22:38:54 compute-0 nova_compute[189508]: 2025-12-01 22:38:54.744 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:55 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:38:54.998 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:38:55 compute-0 nova_compute[189508]: 2025-12-01 22:38:55.491 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "99b450eb-11ab-433d-9cf3-da58ea311e94" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:38:55 compute-0 nova_compute[189508]: 2025-12-01 22:38:55.492 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:38:55 compute-0 nova_compute[189508]: 2025-12-01 22:38:55.510 189512 DEBUG nova.compute.manager [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 22:38:55 compute-0 nova_compute[189508]: 2025-12-01 22:38:55.608 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:38:55 compute-0 nova_compute[189508]: 2025-12-01 22:38:55.609 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:38:55 compute-0 nova_compute[189508]: 2025-12-01 22:38:55.622 189512 DEBUG nova.virt.hardware [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 22:38:55 compute-0 nova_compute[189508]: 2025-12-01 22:38:55.623 189512 INFO nova.compute.claims [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 22:38:55 compute-0 nova_compute[189508]: 2025-12-01 22:38:55.823 189512 DEBUG nova.compute.provider_tree [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.022 189512 DEBUG nova.scheduler.client.report [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.048 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.050 189512 DEBUG nova.compute.manager [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.094 189512 DEBUG nova.compute.manager [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.096 189512 DEBUG nova.network.neutron [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.116 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.120 189512 INFO nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.168 189512 DEBUG nova.compute.manager [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.280 189512 DEBUG nova.compute.manager [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.284 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.286 189512 INFO nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Creating image(s)#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.288 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "/var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.289 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.292 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.327 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.392 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.395 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "9c3ca1997acb58c7aa0cee513cca827b62b8612e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.396 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "9c3ca1997acb58c7aa0cee513cca827b62b8612e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.414 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.487 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.490 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e,backing_fmt=raw /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.545 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e,backing_fmt=raw /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk 1073741824" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.547 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "9c3ca1997acb58c7aa0cee513cca827b62b8612e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.548 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.648 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.653 189512 DEBUG nova.virt.disk.api [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Checking if we can resize image /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.654 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.732 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.734 189512 DEBUG nova.virt.disk.api [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Cannot resize image /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.734 189512 DEBUG nova.objects.instance [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lazy-loading 'migration_context' on Instance uuid 99b450eb-11ab-433d-9cf3-da58ea311e94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.749 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "/var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.750 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.751 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.763 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:56 compute-0 podman[242814]: 2025-12-01 22:38:56.804072665 +0000 UTC m=+0.085967452 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd)
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.837 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.839 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.840 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.853 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.917 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:56 compute-0 nova_compute[189508]: 2025-12-01 22:38:56.920 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:57 compute-0 nova_compute[189508]: 2025-12-01 22:38:57.100 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 1073741824" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:57 compute-0 nova_compute[189508]: 2025-12-01 22:38:57.102 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:38:57 compute-0 nova_compute[189508]: 2025-12-01 22:38:57.104 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:38:57 compute-0 nova_compute[189508]: 2025-12-01 22:38:57.198 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:38:57 compute-0 nova_compute[189508]: 2025-12-01 22:38:57.201 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 22:38:57 compute-0 nova_compute[189508]: 2025-12-01 22:38:57.202 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Ensure instance console log exists: /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 22:38:57 compute-0 nova_compute[189508]: 2025-12-01 22:38:57.204 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:38:57 compute-0 nova_compute[189508]: 2025-12-01 22:38:57.205 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:38:57 compute-0 nova_compute[189508]: 2025-12-01 22:38:57.206 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:38:58 compute-0 podman[242849]: 2025-12-01 22:38:58.839199405 +0000 UTC m=+0.108682269 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:38:59 compute-0 nova_compute[189508]: 2025-12-01 22:38:59.141 189512 DEBUG nova.network.neutron [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Successfully updated port: 7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 22:38:59 compute-0 nova_compute[189508]: 2025-12-01 22:38:59.167 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "refresh_cache-99b450eb-11ab-433d-9cf3-da58ea311e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:38:59 compute-0 nova_compute[189508]: 2025-12-01 22:38:59.168 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquired lock "refresh_cache-99b450eb-11ab-433d-9cf3-da58ea311e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:38:59 compute-0 nova_compute[189508]: 2025-12-01 22:38:59.169 189512 DEBUG nova.network.neutron [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 22:38:59 compute-0 nova_compute[189508]: 2025-12-01 22:38:59.242 189512 DEBUG nova.compute.manager [req-6c80514d-136c-4bc0-a1ea-b0f4a4b00de5 req-ed32d5ee-02bf-49c4-8fd0-2484360a1cde c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Received event network-changed-7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:38:59 compute-0 nova_compute[189508]: 2025-12-01 22:38:59.243 189512 DEBUG nova.compute.manager [req-6c80514d-136c-4bc0-a1ea-b0f4a4b00de5 req-ed32d5ee-02bf-49c4-8fd0-2484360a1cde c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Refreshing instance network info cache due to event network-changed-7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:38:59 compute-0 nova_compute[189508]: 2025-12-01 22:38:59.244 189512 DEBUG oslo_concurrency.lockutils [req-6c80514d-136c-4bc0-a1ea-b0f4a4b00de5 req-ed32d5ee-02bf-49c4-8fd0-2484360a1cde c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-99b450eb-11ab-433d-9cf3-da58ea311e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:38:59 compute-0 podman[203693]: time="2025-12-01T22:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:38:59 compute-0 nova_compute[189508]: 2025-12-01 22:38:59.757 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:38:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:38:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4776 "" "Go-http-client/1.1"
Dec  1 22:39:00 compute-0 nova_compute[189508]: 2025-12-01 22:39:00.058 189512 DEBUG nova.network.neutron [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.121 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.174 189512 DEBUG nova.network.neutron [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Updating instance_info_cache with network_info: [{"id": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "address": "fa:16:3e:b8:6b:fb", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e734aeb-82", "ovs_interfaceid": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.196 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Releasing lock "refresh_cache-99b450eb-11ab-433d-9cf3-da58ea311e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.197 189512 DEBUG nova.compute.manager [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Instance network_info: |[{"id": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "address": "fa:16:3e:b8:6b:fb", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e734aeb-82", "ovs_interfaceid": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.199 189512 DEBUG oslo_concurrency.lockutils [req-6c80514d-136c-4bc0-a1ea-b0f4a4b00de5 req-ed32d5ee-02bf-49c4-8fd0-2484360a1cde c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-99b450eb-11ab-433d-9cf3-da58ea311e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.200 189512 DEBUG nova.network.neutron [req-6c80514d-136c-4bc0-a1ea-b0f4a4b00de5 req-ed32d5ee-02bf-49c4-8fd0-2484360a1cde c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Refreshing network info cache for port 7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.206 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Start _get_guest_xml network_info=[{"id": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "address": "fa:16:3e:b8:6b:fb", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e734aeb-82", "ovs_interfaceid": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T22:30:45Z,direct_url=<?>,disk_format='qcow2',id=ca09b2c0-a624-4fb0-b624-b8d92d761f4a,min_disk=0,min_ram=0,name='cirros',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T22:30:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}], 'ephemerals': [{'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'size': 1, 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.221 189512 WARNING nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.229 189512 DEBUG nova.virt.libvirt.host [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.230 189512 DEBUG nova.virt.libvirt.host [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.242 189512 DEBUG nova.virt.libvirt.host [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.243 189512 DEBUG nova.virt.libvirt.host [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.244 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.245 189512 DEBUG nova.virt.hardware [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T22:30:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='aa9783c0-34c0-4a4d-bc86-59429edc9395',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T22:30:45Z,direct_url=<?>,disk_format='qcow2',id=ca09b2c0-a624-4fb0-b624-b8d92d761f4a,min_disk=0,min_ram=0,name='cirros',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T22:30:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.246 189512 DEBUG nova.virt.hardware [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.246 189512 DEBUG nova.virt.hardware [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.247 189512 DEBUG nova.virt.hardware [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.248 189512 DEBUG nova.virt.hardware [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.248 189512 DEBUG nova.virt.hardware [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.249 189512 DEBUG nova.virt.hardware [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.250 189512 DEBUG nova.virt.hardware [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.251 189512 DEBUG nova.virt.hardware [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.251 189512 DEBUG nova.virt.hardware [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.252 189512 DEBUG nova.virt.hardware [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.257 189512 DEBUG nova.virt.libvirt.vif [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:38:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-xggku2d-wifaxhcghats-izgcjuxscyy2-vnf-fyan4lptzpzi',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xggku2d-wifaxhcghats-izgcjuxscyy2-vnf-fyan4lptzpzi',id=3,image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='40d7879f-33f5-4fcb-8784-d9088730e18f'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af2fbf0e1b5f40c19aed69d241db7727',ramdisk_id='',reservation_id='r-8cy17cl9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:38:56Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMjQ4NjYxMTY5MTAxMzU0NDMzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAyNDg2NjExNjkxMDEzNTQ0MzM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDI0ODY2MTE2OTEwMTM1NDQzMz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAyNDg2NjExNjkxMDEzNTQ0MzM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMjQ4NjYxMTY5MTAxMzU0NDMzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMjQ4NjYxMTY5MTAxMzU0NDMzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  1 22:39:01 compute-0 nova_compute[189508]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDI0ODY2MTE2OTEwMTM1NDQzMz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTAyNDg2NjExNjkxMDEzNTQ0MzM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wMjQ4NjYxMTY5MTAxMzU0NDMzPT0tLQo=',user_id='3b810e864d6c4d058e539f62ad181096',uuid=99b450eb-11ab-433d-9cf3-da58ea311e94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "address": "fa:16:3e:b8:6b:fb", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e734aeb-82", "ovs_interfaceid": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.258 189512 DEBUG nova.network.os_vif_util [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converting VIF {"id": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "address": "fa:16:3e:b8:6b:fb", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e734aeb-82", "ovs_interfaceid": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.259 189512 DEBUG nova.network.os_vif_util [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:6b:fb,bridge_name='br-int',has_traffic_filtering=True,id=7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7e734aeb-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.261 189512 DEBUG nova.objects.instance [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lazy-loading 'pci_devices' on Instance uuid 99b450eb-11ab-433d-9cf3-da58ea311e94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.279 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] End _get_guest_xml xml=<domain type="kvm">
Dec  1 22:39:01 compute-0 nova_compute[189508]:  <uuid>99b450eb-11ab-433d-9cf3-da58ea311e94</uuid>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  <name>instance-00000003</name>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  <memory>524288</memory>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  <vcpu>1</vcpu>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  <metadata>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <nova:name>vn-xggku2d-wifaxhcghats-izgcjuxscyy2-vnf-fyan4lptzpzi</nova:name>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <nova:creationTime>2025-12-01 22:39:01</nova:creationTime>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <nova:flavor name="m1.small">
Dec  1 22:39:01 compute-0 nova_compute[189508]:        <nova:memory>512</nova:memory>
Dec  1 22:39:01 compute-0 nova_compute[189508]:        <nova:disk>1</nova:disk>
Dec  1 22:39:01 compute-0 nova_compute[189508]:        <nova:swap>0</nova:swap>
Dec  1 22:39:01 compute-0 nova_compute[189508]:        <nova:ephemeral>1</nova:ephemeral>
Dec  1 22:39:01 compute-0 nova_compute[189508]:        <nova:vcpus>1</nova:vcpus>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      </nova:flavor>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <nova:owner>
Dec  1 22:39:01 compute-0 nova_compute[189508]:        <nova:user uuid="3b810e864d6c4d058e539f62ad181096">admin</nova:user>
Dec  1 22:39:01 compute-0 nova_compute[189508]:        <nova:project uuid="af2fbf0e1b5f40c19aed69d241db7727">admin</nova:project>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      </nova:owner>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <nova:root type="image" uuid="ca09b2c0-a624-4fb0-b624-b8d92d761f4a"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <nova:ports>
Dec  1 22:39:01 compute-0 nova_compute[189508]:        <nova:port uuid="7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3">
Dec  1 22:39:01 compute-0 nova_compute[189508]:          <nova:ip type="fixed" address="192.168.0.11" ipVersion="4"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:        </nova:port>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      </nova:ports>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    </nova:instance>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  </metadata>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  <sysinfo type="smbios">
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <system>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <entry name="manufacturer">RDO</entry>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <entry name="product">OpenStack Compute</entry>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <entry name="serial">99b450eb-11ab-433d-9cf3-da58ea311e94</entry>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <entry name="uuid">99b450eb-11ab-433d-9cf3-da58ea311e94</entry>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <entry name="family">Virtual Machine</entry>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    </system>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  </sysinfo>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  <os>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <boot dev="hd"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <smbios mode="sysinfo"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  </os>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  <features>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <acpi/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <apic/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <vmcoreinfo/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  </features>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  <clock offset="utc">
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <timer name="hpet" present="no"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  </clock>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  <cpu mode="host-model" match="exact">
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  </cpu>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  <devices>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <target dev="vda" bus="virtio"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <target dev="vdb" bus="virtio"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <disk type="file" device="cdrom">
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.config"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <target dev="sda" bus="sata"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <interface type="ethernet">
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <mac address="fa:16:3e:b8:6b:fb"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <mtu size="1442"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <target dev="tap7e734aeb-82"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    </interface>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <serial type="pty">
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <log file="/var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/console.log" append="off"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    </serial>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <video>
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    </video>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <input type="tablet" bus="usb"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <rng model="virtio">
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <backend model="random">/dev/urandom</backend>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    </rng>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <controller type="usb" index="0"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    <memballoon model="virtio">
Dec  1 22:39:01 compute-0 nova_compute[189508]:      <stats period="10"/>
Dec  1 22:39:01 compute-0 nova_compute[189508]:    </memballoon>
Dec  1 22:39:01 compute-0 nova_compute[189508]:  </devices>
Dec  1 22:39:01 compute-0 nova_compute[189508]: </domain>
Dec  1 22:39:01 compute-0 nova_compute[189508]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.292 189512 DEBUG nova.compute.manager [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Preparing to wait for external event network-vif-plugged-7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.292 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.292 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.292 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.293 189512 DEBUG nova.virt.libvirt.vif [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:38:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-xggku2d-wifaxhcghats-izgcjuxscyy2-vnf-fyan4lptzpzi',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xggku2d-wifaxhcghats-izgcjuxscyy2-vnf-fyan4lptzpzi',id=3,image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='40d7879f-33f5-4fcb-8784-d9088730e18f'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af2fbf0e1b5f40c19aed69d241db7727',ramdisk_id='',reservation_id='r-8cy17cl9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:38:56Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMjQ4NjYxMTY5MTAxMzU0NDMzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAyNDg2NjExNjkxMDEzNTQ0MzM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDI0ODY2MTE2OTEwMTM1NDQzMz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAyNDg2NjExNjkxMDEzNTQ0MzM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMjQ4NjYxMTY5MTAxMzU0NDMzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMjQ4NjYxMTY5MTAxMzU0NDMzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.294 189512 DEBUG nova.network.os_vif_util [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converting VIF {"id": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "address": "fa:16:3e:b8:6b:fb", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e734aeb-82", "ovs_interfaceid": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.296 189512 DEBUG nova.network.os_vif_util [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:6b:fb,bridge_name='br-int',has_traffic_filtering=True,id=7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7e734aeb-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.296 189512 DEBUG os_vif [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:6b:fb,bridge_name='br-int',has_traffic_filtering=True,id=7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7e734aeb-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.297 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.297 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.298 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.303 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.304 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e734aeb-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.305 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7e734aeb-82, col_values=(('external_ids', {'iface-id': '7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:6b:fb', 'vm-uuid': '99b450eb-11ab-433d-9cf3-da58ea311e94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.307 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:01 compute-0 NetworkManager[56278]: <info>  [1764628741.3097] manager: (tap7e734aeb-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.309 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.319 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.321 189512 INFO os_vif [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:6b:fb,bridge_name='br-int',has_traffic_filtering=True,id=7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7e734aeb-82')#033[00m
Dec  1 22:39:01 compute-0 rsyslogd[236992]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 22:39:01.257 189512 DEBUG nova.virt.libvirt.vif [None req-48775595-47 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.380 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.381 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.382 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.382 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No VIF found with MAC fa:16:3e:b8:6b:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 22:39:01 compute-0 nova_compute[189508]: 2025-12-01 22:39:01.383 189512 INFO nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Using config drive#033[00m
Dec  1 22:39:01 compute-0 openstack_network_exporter[205887]: ERROR   22:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:39:01 compute-0 openstack_network_exporter[205887]: ERROR   22:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:39:01 compute-0 openstack_network_exporter[205887]: ERROR   22:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:39:01 compute-0 openstack_network_exporter[205887]: ERROR   22:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:39:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:39:01 compute-0 openstack_network_exporter[205887]: ERROR   22:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:39:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:39:02 compute-0 nova_compute[189508]: 2025-12-01 22:39:02.262 189512 INFO nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Creating config drive at /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.config#033[00m
Dec  1 22:39:02 compute-0 nova_compute[189508]: 2025-12-01 22:39:02.272 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxk91m8jm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:39:02 compute-0 nova_compute[189508]: 2025-12-01 22:39:02.423 189512 DEBUG oslo_concurrency.processutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxk91m8jm" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:39:02 compute-0 kernel: tap7e734aeb-82: entered promiscuous mode
Dec  1 22:39:02 compute-0 NetworkManager[56278]: <info>  [1764628742.5235] manager: (tap7e734aeb-82): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Dec  1 22:39:02 compute-0 ovn_controller[97770]: 2025-12-01T22:39:02Z|00040|binding|INFO|Claiming lport 7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 for this chassis.
Dec  1 22:39:02 compute-0 ovn_controller[97770]: 2025-12-01T22:39:02Z|00041|binding|INFO|7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3: Claiming fa:16:3e:b8:6b:fb 192.168.0.11
Dec  1 22:39:02 compute-0 nova_compute[189508]: 2025-12-01 22:39:02.534 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:02.547 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:6b:fb 192.168.0.11'], port_security=['fa:16:3e:b8:6b:fb 192.168.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-37pfkxggku2d-wifaxhcghats-izgcjuxscyy2-port-ncy6cathjcrw', 'neutron:cidrs': '192.168.0.11/24', 'neutron:device_id': '99b450eb-11ab-433d-9cf3-da58ea311e94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-37pfkxggku2d-wifaxhcghats-izgcjuxscyy2-port-ncy6cathjcrw', 'neutron:project_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a56d0f98-60b7-42d6-a9fa-4c77301b81c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8157a1f-e2f4-4050-ab6e-a95d2880ddbb, chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:39:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:02.548 106662 INFO neutron.agent.ovn.metadata.agent [-] Port 7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 in datapath dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c bound to our chassis#033[00m
Dec  1 22:39:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:02.550 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c#033[00m
Dec  1 22:39:02 compute-0 ovn_controller[97770]: 2025-12-01T22:39:02Z|00042|binding|INFO|Setting lport 7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 ovn-installed in OVS
Dec  1 22:39:02 compute-0 ovn_controller[97770]: 2025-12-01T22:39:02Z|00043|binding|INFO|Setting lport 7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 up in Southbound
Dec  1 22:39:02 compute-0 nova_compute[189508]: 2025-12-01 22:39:02.562 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:02 compute-0 nova_compute[189508]: 2025-12-01 22:39:02.569 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:02.573 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[5e20870e-c3b1-437c-93f8-ab6340bf43da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:39:02 compute-0 systemd-machined[155759]: New machine qemu-3-instance-00000003.
Dec  1 22:39:02 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Dec  1 22:39:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:02.619 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[d968c861-e3f7-49e9-8259-8cca0a79d38e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:39:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:02.622 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[ce175d7a-f1c3-4e69-ade7-75a5e01d6660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:39:02 compute-0 systemd-udevd[242896]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:39:02 compute-0 NetworkManager[56278]: <info>  [1764628742.6501] device (tap7e734aeb-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 22:39:02 compute-0 NetworkManager[56278]: <info>  [1764628742.6565] device (tap7e734aeb-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 22:39:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:02.674 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[66696327-08d1-4e48-97d4-20f94978a324]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:39:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:02.709 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[54e2346b-faaa-4421-ad2f-ad0e147bb4a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd6e3c27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:b1:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384760, 'reachable_time': 38261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242903, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:39:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:02.738 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[3466b78c-943f-4169-ad18-f6dcc061e2c2]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapdd6e3c27-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384779, 'tstamp': 384779}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242908, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdd6e3c27-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384784, 'tstamp': 384784}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242908, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:39:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:02.740 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd6e3c27-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:39:02 compute-0 nova_compute[189508]: 2025-12-01 22:39:02.742 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:02 compute-0 nova_compute[189508]: 2025-12-01 22:39:02.744 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:02.744 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd6e3c27-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:39:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:02.745 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:39:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:02.745 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdd6e3c27-10, col_values=(('external_ids', {'iface-id': 'e303b09b-4673-4950-aa2d-91085a5bc5f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:39:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:02.746 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:39:02 compute-0 nova_compute[189508]: 2025-12-01 22:39:02.939 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764628742.9385588, 99b450eb-11ab-433d-9cf3-da58ea311e94 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:39:02 compute-0 nova_compute[189508]: 2025-12-01 22:39:02.940 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] VM Started (Lifecycle Event)#033[00m
Dec  1 22:39:02 compute-0 nova_compute[189508]: 2025-12-01 22:39:02.972 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:39:02 compute-0 nova_compute[189508]: 2025-12-01 22:39:02.980 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764628742.9389772, 99b450eb-11ab-433d-9cf3-da58ea311e94 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:39:02 compute-0 nova_compute[189508]: 2025-12-01 22:39:02.980 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] VM Paused (Lifecycle Event)#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.001 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.008 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.036 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:39:03 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  1 22:39:03 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  1 22:39:03 compute-0 podman[242917]: 2025-12-01 22:39:03.255408333 +0000 UTC m=+0.093877186 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.271 189512 DEBUG nova.compute.manager [req-ac1bc113-4b96-4c6e-970b-a7d6cf96ff50 req-6c02b782-8911-4e9b-b30a-943e34f6a53c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Received event network-vif-plugged-7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.271 189512 DEBUG oslo_concurrency.lockutils [req-ac1bc113-4b96-4c6e-970b-a7d6cf96ff50 req-6c02b782-8911-4e9b-b30a-943e34f6a53c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.272 189512 DEBUG oslo_concurrency.lockutils [req-ac1bc113-4b96-4c6e-970b-a7d6cf96ff50 req-6c02b782-8911-4e9b-b30a-943e34f6a53c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.272 189512 DEBUG oslo_concurrency.lockutils [req-ac1bc113-4b96-4c6e-970b-a7d6cf96ff50 req-6c02b782-8911-4e9b-b30a-943e34f6a53c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.272 189512 DEBUG nova.compute.manager [req-ac1bc113-4b96-4c6e-970b-a7d6cf96ff50 req-6c02b782-8911-4e9b-b30a-943e34f6a53c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Processing event network-vif-plugged-7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.273 189512 DEBUG nova.compute.manager [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.278 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764628743.277251, 99b450eb-11ab-433d-9cf3-da58ea311e94 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.278 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] VM Resumed (Lifecycle Event)#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.281 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.286 189512 INFO nova.virt.libvirt.driver [-] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Instance spawned successfully.#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.287 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.325 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.333 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.333 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.334 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.335 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.335 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.336 189512 DEBUG nova.virt.libvirt.driver [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.345 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:39:03 compute-0 podman[242916]: 2025-12-01 22:39:03.360244721 +0000 UTC m=+0.198689423 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.381 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.416 189512 INFO nova.compute.manager [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Took 7.13 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.417 189512 DEBUG nova.compute.manager [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.505 189512 INFO nova.compute.manager [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Took 7.94 seconds to build instance.#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.541 189512 DEBUG oslo_concurrency.lockutils [None req-48775595-47ab-4a0b-9f35-624a69ad9fe8 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.592 189512 DEBUG nova.network.neutron [req-6c80514d-136c-4bc0-a1ea-b0f4a4b00de5 req-ed32d5ee-02bf-49c4-8fd0-2484360a1cde c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Updated VIF entry in instance network info cache for port 7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.593 189512 DEBUG nova.network.neutron [req-6c80514d-136c-4bc0-a1ea-b0f4a4b00de5 req-ed32d5ee-02bf-49c4-8fd0-2484360a1cde c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Updating instance_info_cache with network_info: [{"id": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "address": "fa:16:3e:b8:6b:fb", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e734aeb-82", "ovs_interfaceid": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:39:03 compute-0 nova_compute[189508]: 2025-12-01 22:39:03.609 189512 DEBUG oslo_concurrency.lockutils [req-6c80514d-136c-4bc0-a1ea-b0f4a4b00de5 req-ed32d5ee-02bf-49c4-8fd0-2484360a1cde c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-99b450eb-11ab-433d-9cf3-da58ea311e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:39:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:04.613 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:39:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:04.614 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:39:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:39:04.615 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:39:04 compute-0 nova_compute[189508]: 2025-12-01 22:39:04.751 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:05 compute-0 nova_compute[189508]: 2025-12-01 22:39:05.456 189512 DEBUG nova.compute.manager [req-5a769311-0ca1-43bb-98c5-016ed79c65bd req-368669d3-6a2b-4622-b8d4-411e3a222bb0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Received event network-vif-plugged-7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:39:05 compute-0 nova_compute[189508]: 2025-12-01 22:39:05.457 189512 DEBUG oslo_concurrency.lockutils [req-5a769311-0ca1-43bb-98c5-016ed79c65bd req-368669d3-6a2b-4622-b8d4-411e3a222bb0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:39:05 compute-0 nova_compute[189508]: 2025-12-01 22:39:05.457 189512 DEBUG oslo_concurrency.lockutils [req-5a769311-0ca1-43bb-98c5-016ed79c65bd req-368669d3-6a2b-4622-b8d4-411e3a222bb0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:39:05 compute-0 nova_compute[189508]: 2025-12-01 22:39:05.457 189512 DEBUG oslo_concurrency.lockutils [req-5a769311-0ca1-43bb-98c5-016ed79c65bd req-368669d3-6a2b-4622-b8d4-411e3a222bb0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:39:05 compute-0 nova_compute[189508]: 2025-12-01 22:39:05.458 189512 DEBUG nova.compute.manager [req-5a769311-0ca1-43bb-98c5-016ed79c65bd req-368669d3-6a2b-4622-b8d4-411e3a222bb0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] No waiting events found dispatching network-vif-plugged-7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:39:05 compute-0 nova_compute[189508]: 2025-12-01 22:39:05.458 189512 WARNING nova.compute.manager [req-5a769311-0ca1-43bb-98c5-016ed79c65bd req-368669d3-6a2b-4622-b8d4-411e3a222bb0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Received unexpected event network-vif-plugged-7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 for instance with vm_state active and task_state None.#033[00m
Dec  1 22:39:06 compute-0 nova_compute[189508]: 2025-12-01 22:39:06.308 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:08 compute-0 podman[242978]: 2025-12-01 22:39:08.852758519 +0000 UTC m=+0.103325026 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter)
Dec  1 22:39:08 compute-0 podman[242979]: 2025-12-01 22:39:08.864155583 +0000 UTC m=+0.103014648 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, managed_by=edpm_ansible, name=ubi9, container_name=kepler, distribution-scope=public, release=1214.1726694543, io.openshift.expose-services=, config_id=edpm, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container)
Dec  1 22:39:08 compute-0 podman[242977]: 2025-12-01 22:39:08.874195181 +0000 UTC m=+0.124397338 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:39:08 compute-0 podman[242976]: 2025-12-01 22:39:08.89063876 +0000 UTC m=+0.147992518 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:39:09 compute-0 nova_compute[189508]: 2025-12-01 22:39:09.755 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:11 compute-0 nova_compute[189508]: 2025-12-01 22:39:11.311 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:14 compute-0 nova_compute[189508]: 2025-12-01 22:39:14.759 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:16 compute-0 nova_compute[189508]: 2025-12-01 22:39:16.315 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:19 compute-0 nova_compute[189508]: 2025-12-01 22:39:19.763 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:21 compute-0 nova_compute[189508]: 2025-12-01 22:39:21.319 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:24 compute-0 nova_compute[189508]: 2025-12-01 22:39:24.767 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:24 compute-0 podman[243055]: 2025-12-01 22:39:24.850249847 +0000 UTC m=+0.119646972 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 22:39:26 compute-0 nova_compute[189508]: 2025-12-01 22:39:26.322 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:27 compute-0 podman[243076]: 2025-12-01 22:39:27.864656197 +0000 UTC m=+0.134155498 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 22:39:29 compute-0 podman[203693]: time="2025-12-01T22:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:39:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:39:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4769 "" "Go-http-client/1.1"
Dec  1 22:39:29 compute-0 nova_compute[189508]: 2025-12-01 22:39:29.771 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:29 compute-0 podman[243095]: 2025-12-01 22:39:29.864038537 +0000 UTC m=+0.134180479 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2)
Dec  1 22:39:31 compute-0 nova_compute[189508]: 2025-12-01 22:39:31.325 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:31 compute-0 openstack_network_exporter[205887]: ERROR   22:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:39:31 compute-0 openstack_network_exporter[205887]: ERROR   22:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:39:31 compute-0 openstack_network_exporter[205887]: ERROR   22:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:39:31 compute-0 openstack_network_exporter[205887]: ERROR   22:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:39:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:39:31 compute-0 openstack_network_exporter[205887]: ERROR   22:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:39:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:39:32 compute-0 nova_compute[189508]: 2025-12-01 22:39:32.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:39:32 compute-0 ovn_controller[97770]: 2025-12-01T22:39:32Z|00044|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec  1 22:39:33 compute-0 nova_compute[189508]: 2025-12-01 22:39:33.196 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:39:33 compute-0 podman[243115]: 2025-12-01 22:39:33.808757527 +0000 UTC m=+0.074397375 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  1 22:39:33 compute-0 podman[243114]: 2025-12-01 22:39:33.860074106 +0000 UTC m=+0.134512679 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Dec  1 22:39:34 compute-0 nova_compute[189508]: 2025-12-01 22:39:34.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:39:34 compute-0 nova_compute[189508]: 2025-12-01 22:39:34.774 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:35 compute-0 nova_compute[189508]: 2025-12-01 22:39:35.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:39:35 compute-0 nova_compute[189508]: 2025-12-01 22:39:35.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.266 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.267 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.267 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.281 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'db72b066-1974-41bb-a917-13b5ba129196', 'name': 'test_0', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.300 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 99b450eb-11ab-433d-9cf3-da58ea311e94 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:39:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:35.303 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/99b450eb-11ab-433d-9cf3-da58ea311e94 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82f68aee2d35afc7725a847ea4300457258faf9d3b47fbdf3a1dc69f53294b24" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 22:39:36 compute-0 ovn_controller[97770]: 2025-12-01T22:39:36Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:6b:fb 192.168.0.11
Dec  1 22:39:36 compute-0 ovn_controller[97770]: 2025-12-01T22:39:36Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:6b:fb 192.168.0.11
Dec  1 22:39:36 compute-0 nova_compute[189508]: 2025-12-01 22:39:36.329 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:36 compute-0 nova_compute[189508]: 2025-12-01 22:39:36.561 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:39:36 compute-0 nova_compute[189508]: 2025-12-01 22:39:36.562 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:39:36 compute-0 nova_compute[189508]: 2025-12-01 22:39:36.562 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.722 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Mon, 01 Dec 2025 22:39:35 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-8e00ad9e-31db-4faa-b3a4-51224fc975e6 x-openstack-request-id: req-8e00ad9e-31db-4faa-b3a4-51224fc975e6 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.722 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "99b450eb-11ab-433d-9cf3-da58ea311e94", "name": "vn-xggku2d-wifaxhcghats-izgcjuxscyy2-vnf-fyan4lptzpzi", "status": "ACTIVE", "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "user_id": "3b810e864d6c4d058e539f62ad181096", "metadata": {"metering.server_group": "40d7879f-33f5-4fcb-8784-d9088730e18f"}, "hostId": "968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d", "image": {"id": "ca09b2c0-a624-4fb0-b624-b8d92d761f4a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ca09b2c0-a624-4fb0-b624-b8d92d761f4a"}]}, "flavor": {"id": "aa9783c0-34c0-4a4d-bc86-59429edc9395", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/aa9783c0-34c0-4a4d-bc86-59429edc9395"}]}, "created": "2025-12-01T22:38:51Z", "updated": "2025-12-01T22:39:03Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.11", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b8:6b:fb"}, {"version": 4, "addr": "192.168.122.174", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b8:6b:fb"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/99b450eb-11ab-433d-9cf3-da58ea311e94"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/99b450eb-11ab-433d-9cf3-da58ea311e94"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T22:39:03.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.723 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/99b450eb-11ab-433d-9cf3-da58ea311e94 used request id req-8e00ad9e-31db-4faa-b3a4-51224fc975e6 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.727 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '99b450eb-11ab-433d-9cf3-da58ea311e94', 'name': 'vn-xggku2d-wifaxhcghats-izgcjuxscyy2-vnf-fyan4lptzpzi', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {'metering.server_group': '40d7879f-33f5-4fcb-8784-d9088730e18f'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.734 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ef18b98f-df89-44d0-9215-5c2e556e10be', 'name': 'vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {'metering.server_group': '40d7879f-33f5-4fcb-8784-d9088730e18f'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.735 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.736 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.737 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T22:39:36.736440) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.748 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.755 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 99b450eb-11ab-433d-9cf3-da58ea311e94 / tap7e734aeb-82 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.756 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.packets volume: 6 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.764 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.packets volume: 39 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.764 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.765 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.765 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.765 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.765 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.766 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.767 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.767 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T22:39:36.765959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.767 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.768 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.769 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.769 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.769 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.769 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.769 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.770 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.770 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.770 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T22:39:36.770338) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.771 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.772 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.772 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.772 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.773 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.773 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.773 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.774 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.774 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T22:39:36.773819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.821 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.821 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.822 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.863 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.864 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.865 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.911 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.912 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.913 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.914 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.914 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.915 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.915 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.917 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T22:39:36.916855) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.995 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.996 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:36.997 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.111 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.bytes volume: 22674432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.112 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.bytes volume: 2204160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.112 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.bytes volume: 328014 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.218 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.218 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.219 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.221 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.221 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.222 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.222 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.222 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.223 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.223 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 484161753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.224 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 126486600 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.224 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T22:39:37.222477) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.224 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 84264950 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.225 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.latency volume: 499691251 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.226 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.latency volume: 81055495 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.226 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.latency volume: 52289885 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.227 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.latency volume: 493804988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.227 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.latency volume: 100192430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.228 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.latency volume: 68791964 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.229 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.229 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.230 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.230 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.231 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.231 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.232 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.232 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T22:39:37.231190) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.232 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.233 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.233 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.allocation volume: 21241856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.234 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.234 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.235 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.236 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.236 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.238 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.238 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.238 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.239 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.239 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.240 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.240 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.240 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T22:39:37.239865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.241 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.242 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.242 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.requests volume: 812 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.243 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.requests volume: 130 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.244 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.244 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.245 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.245 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.246 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.247 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.247 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.247 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.247 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.248 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.248 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T22:39:37.247882) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.248 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.250 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.250 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.251 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.usage volume: 21037056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.252 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.252 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.253 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.254 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.254 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.255 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.256 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.256 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.256 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.257 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.258 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T22:39:37.257093) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.258 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.258 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.259 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.260 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.bytes volume: 41590784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.260 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.261 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.261 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.261 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.261 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.262 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.262 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.262 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.262 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.262 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.263 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 2925316221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.263 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 17009348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.263 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T22:39:37.262743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.263 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.latency volume: 1738374865 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.263 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.latency volume: 11037405 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.264 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.264 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.latency volume: 2011182396 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.264 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.latency volume: 11549778 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.264 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.264 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.265 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.265 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.265 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.267 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.267 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T22:39:37.265374) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.300 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/cpu volume: 37730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.331 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/cpu volume: 32050000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.352 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/cpu volume: 308710000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.353 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.354 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.354 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.354 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.354 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.355 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.355 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T22:39:37.354479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.355 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.356 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.356 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.357 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.358 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.358 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T22:39:37.357746) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.359 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.359 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.360 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.360 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.360 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.360 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.361 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.361 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T22:39:37.361133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.361 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.362 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.362 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.requests volume: 218 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.363 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.363 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.364 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.requests volume: 238 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.364 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.364 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.365 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.365 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.366 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.367 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T22:39:37.366462) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.367 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-xggku2d-wifaxhcghats-izgcjuxscyy2-vnf-fyan4lptzpzi>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-xggku2d-wifaxhcghats-izgcjuxscyy2-vnf-fyan4lptzpzi>]
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.368 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.368 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.368 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.369 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T22:39:37.368948) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.370 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.370 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.371 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.371 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.371 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.372 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.372 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T22:39:37.371238) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.372 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.packets volume: 11 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.372 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.373 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.373 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.374 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.374 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.374 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T22:39:37.374385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.375 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.376 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.377 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.377 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.377 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T22:39:37.376897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.378 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.378 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.379 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.379 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.379 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.379 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.379 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.380 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.380 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.380 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T22:39:37.379914) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.381 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.381 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.382 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.382 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.383 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.383 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.383 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.384 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T22:39:37.383554) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.384 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.bytes volume: 1073 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.385 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.bytes volume: 4656 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.385 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.385 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.386 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.387 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.387 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T22:39:37.386831) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.388 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.388 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.389 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.389 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.390 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.390 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.390 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.391 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.391 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/memory.usage volume: 48.78515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.391 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T22:39:37.390893) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.392 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/memory.usage volume: 33.30859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.392 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/memory.usage volume: 49.03515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.393 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.393 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.393 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.393 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.394 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.394 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.394 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T22:39:37.393823) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.394 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-xggku2d-wifaxhcghats-izgcjuxscyy2-vnf-fyan4lptzpzi>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-xggku2d-wifaxhcghats-izgcjuxscyy2-vnf-fyan4lptzpzi>]
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.395 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.395 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.395 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.396 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.396 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.397 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.397 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T22:39:37.396392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.397 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.bytes volume: 1388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.398 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.bytes volume: 4891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.398 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.401 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.401 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.401 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.401 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.401 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.401 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.402 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.402 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.402 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.402 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.402 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.402 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.402 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.403 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.403 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:39:37.403 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.258 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Updating instance_info_cache with network_info: [{"id": "112b3e51-47c2-499f-9108-af9d45576c1e", "address": "fa:16:3e:96:04:8b", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap112b3e51-47", "ovs_interfaceid": "112b3e51-47c2-499f-9108-af9d45576c1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.284 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.285 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.285 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.286 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.286 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.287 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.287 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.318 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.319 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.320 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.320 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.445 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:39:39 compute-0 podman[243172]: 2025-12-01 22:39:39.50511817 +0000 UTC m=+0.102580787 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:39:39 compute-0 podman[243171]: 2025-12-01 22:39:39.502916261 +0000 UTC m=+0.106587063 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.507 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.509 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:39:39 compute-0 podman[243173]: 2025-12-01 22:39:39.524026264 +0000 UTC m=+0.126684230 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 22:39:39 compute-0 podman[243174]: 2025-12-01 22:39:39.521798345 +0000 UTC m=+0.115129372 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, container_name=kepler, maintainer=Red Hat, Inc., io.openshift.expose-services=, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.581 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.582 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.658 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.660 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.728 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.737 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.778 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.800 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.801 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.863 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.865 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.930 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:39:39 compute-0 nova_compute[189508]: 2025-12-01 22:39:39.932 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:39:40 compute-0 nova_compute[189508]: 2025-12-01 22:39:40.010 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:39:40 compute-0 nova_compute[189508]: 2025-12-01 22:39:40.027 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:39:40 compute-0 nova_compute[189508]: 2025-12-01 22:39:40.126 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:39:40 compute-0 nova_compute[189508]: 2025-12-01 22:39:40.129 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:39:40 compute-0 nova_compute[189508]: 2025-12-01 22:39:40.238 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:39:40 compute-0 nova_compute[189508]: 2025-12-01 22:39:40.241 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:39:40 compute-0 nova_compute[189508]: 2025-12-01 22:39:40.316 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:39:40 compute-0 nova_compute[189508]: 2025-12-01 22:39:40.318 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:39:40 compute-0 nova_compute[189508]: 2025-12-01 22:39:40.393 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:39:40 compute-0 nova_compute[189508]: 2025-12-01 22:39:40.938 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:39:40 compute-0 nova_compute[189508]: 2025-12-01 22:39:40.940 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4837MB free_disk=72.15798950195312GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:39:40 compute-0 nova_compute[189508]: 2025-12-01 22:39:40.941 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:39:40 compute-0 nova_compute[189508]: 2025-12-01 22:39:40.941 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:39:41 compute-0 nova_compute[189508]: 2025-12-01 22:39:41.034 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:39:41 compute-0 nova_compute[189508]: 2025-12-01 22:39:41.034 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance ef18b98f-df89-44d0-9215-5c2e556e10be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:39:41 compute-0 nova_compute[189508]: 2025-12-01 22:39:41.034 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 99b450eb-11ab-433d-9cf3-da58ea311e94 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:39:41 compute-0 nova_compute[189508]: 2025-12-01 22:39:41.035 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:39:41 compute-0 nova_compute[189508]: 2025-12-01 22:39:41.035 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:39:41 compute-0 nova_compute[189508]: 2025-12-01 22:39:41.128 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:39:41 compute-0 nova_compute[189508]: 2025-12-01 22:39:41.143 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:39:41 compute-0 nova_compute[189508]: 2025-12-01 22:39:41.171 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:39:41 compute-0 nova_compute[189508]: 2025-12-01 22:39:41.172 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:39:41 compute-0 nova_compute[189508]: 2025-12-01 22:39:41.332 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:43 compute-0 nova_compute[189508]: 2025-12-01 22:39:43.085 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:39:44 compute-0 nova_compute[189508]: 2025-12-01 22:39:44.782 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:46 compute-0 nova_compute[189508]: 2025-12-01 22:39:46.334 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:49 compute-0 nova_compute[189508]: 2025-12-01 22:39:49.787 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:51 compute-0 nova_compute[189508]: 2025-12-01 22:39:51.336 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:54 compute-0 nova_compute[189508]: 2025-12-01 22:39:54.791 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:55 compute-0 podman[243285]: 2025-12-01 22:39:55.867619452 +0000 UTC m=+0.131264552 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 22:39:56 compute-0 nova_compute[189508]: 2025-12-01 22:39:56.340 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:39:58 compute-0 podman[243310]: 2025-12-01 22:39:58.806406854 +0000 UTC m=+0.086775305 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:39:59 compute-0 podman[203693]: time="2025-12-01T22:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:39:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:39:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4782 "" "Go-http-client/1.1"
Dec  1 22:39:59 compute-0 nova_compute[189508]: 2025-12-01 22:39:59.795 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:00 compute-0 podman[243331]: 2025-12-01 22:40:00.850744133 +0000 UTC m=+0.120537505 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, 
org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4)
Dec  1 22:40:01 compute-0 nova_compute[189508]: 2025-12-01 22:40:01.343 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:01 compute-0 openstack_network_exporter[205887]: ERROR   22:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:40:01 compute-0 openstack_network_exporter[205887]: ERROR   22:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:40:01 compute-0 openstack_network_exporter[205887]: ERROR   22:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:40:01 compute-0 openstack_network_exporter[205887]: ERROR   22:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:40:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:40:01 compute-0 openstack_network_exporter[205887]: ERROR   22:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:40:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:40:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:04.614 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:40:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:04.615 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:40:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:04.615 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:40:04 compute-0 nova_compute[189508]: 2025-12-01 22:40:04.800 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:04 compute-0 podman[243354]: 2025-12-01 22:40:04.839907898 +0000 UTC m=+0.109128131 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  1 22:40:04 compute-0 podman[243353]: 2025-12-01 22:40:04.923755955 +0000 UTC m=+0.187641166 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller)
Dec  1 22:40:06 compute-0 nova_compute[189508]: 2025-12-01 22:40:06.345 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:09 compute-0 nova_compute[189508]: 2025-12-01 22:40:09.802 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:09 compute-0 podman[243394]: 2025-12-01 22:40:09.835179135 +0000 UTC m=+0.098415576 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:40:09 compute-0 podman[243397]: 2025-12-01 22:40:09.861680592 +0000 UTC m=+0.105994718 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, container_name=kepler, release-0.7.12=, version=9.4, release=1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9)
Dec  1 22:40:09 compute-0 podman[243395]: 2025-12-01 22:40:09.878961993 +0000 UTC m=+0.133231735 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 22:40:09 compute-0 podman[243396]: 2025-12-01 22:40:09.883907184 +0000 UTC m=+0.133142541 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., release=1755695350, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, container_name=openstack_network_exporter)
Dec  1 22:40:11 compute-0 nova_compute[189508]: 2025-12-01 22:40:11.349 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:14 compute-0 nova_compute[189508]: 2025-12-01 22:40:14.806 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:16 compute-0 nova_compute[189508]: 2025-12-01 22:40:16.353 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:19 compute-0 nova_compute[189508]: 2025-12-01 22:40:19.811 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:21 compute-0 nova_compute[189508]: 2025-12-01 22:40:21.356 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:24 compute-0 nova_compute[189508]: 2025-12-01 22:40:24.814 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:26 compute-0 nova_compute[189508]: 2025-12-01 22:40:26.360 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:26 compute-0 podman[243472]: 2025-12-01 22:40:26.863836943 +0000 UTC m=+0.134320693 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:40:29 compute-0 podman[203693]: time="2025-12-01T22:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:40:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:40:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4776 "" "Go-http-client/1.1"
Dec  1 22:40:29 compute-0 nova_compute[189508]: 2025-12-01 22:40:29.821 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:29 compute-0 podman[243495]: 2025-12-01 22:40:29.888799754 +0000 UTC m=+0.155031936 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Dec  1 22:40:31 compute-0 nova_compute[189508]: 2025-12-01 22:40:31.196 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:40:31 compute-0 nova_compute[189508]: 2025-12-01 22:40:31.364 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:31 compute-0 openstack_network_exporter[205887]: ERROR   22:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:40:31 compute-0 openstack_network_exporter[205887]: ERROR   22:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:40:31 compute-0 openstack_network_exporter[205887]: ERROR   22:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:40:31 compute-0 openstack_network_exporter[205887]: ERROR   22:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:40:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:40:31 compute-0 openstack_network_exporter[205887]: ERROR   22:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:40:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:40:31 compute-0 podman[243512]: 2025-12-01 22:40:31.862402787 +0000 UTC m=+0.125930829 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 22:40:32 compute-0 nova_compute[189508]: 2025-12-01 22:40:32.203 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:40:34 compute-0 nova_compute[189508]: 2025-12-01 22:40:34.195 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:40:34 compute-0 nova_compute[189508]: 2025-12-01 22:40:34.825 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:35 compute-0 nova_compute[189508]: 2025-12-01 22:40:35.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:40:35 compute-0 nova_compute[189508]: 2025-12-01 22:40:35.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:40:35 compute-0 nova_compute[189508]: 2025-12-01 22:40:35.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:40:35 compute-0 nova_compute[189508]: 2025-12-01 22:40:35.458 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:40:35 compute-0 nova_compute[189508]: 2025-12-01 22:40:35.459 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:40:35 compute-0 nova_compute[189508]: 2025-12-01 22:40:35.461 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:40:35 compute-0 nova_compute[189508]: 2025-12-01 22:40:35.461 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid db72b066-1974-41bb-a917-13b5ba129196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:40:35 compute-0 podman[243532]: 2025-12-01 22:40:35.853535785 +0000 UTC m=+0.121063610 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 22:40:35 compute-0 podman[243531]: 2025-12-01 22:40:35.870744904 +0000 UTC m=+0.151426659 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:40:36 compute-0 nova_compute[189508]: 2025-12-01 22:40:36.367 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:36 compute-0 nova_compute[189508]: 2025-12-01 22:40:36.404 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updating instance_info_cache with network_info: [{"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:40:36 compute-0 nova_compute[189508]: 2025-12-01 22:40:36.423 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:40:36 compute-0 nova_compute[189508]: 2025-12-01 22:40:36.423 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:40:36 compute-0 nova_compute[189508]: 2025-12-01 22:40:36.424 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:40:36 compute-0 nova_compute[189508]: 2025-12-01 22:40:36.424 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:40:36 compute-0 nova_compute[189508]: 2025-12-01 22:40:36.425 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.242 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.243 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.244 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.245 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.365 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.463 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.466 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.547 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.550 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.648 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.650 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.720 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.729 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.791 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.793 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.851 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.852 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.908 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.909 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.966 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:37 compute-0 nova_compute[189508]: 2025-12-01 22:40:37.973 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.034 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.036 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.126 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.128 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.199 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.201 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.294 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.677 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.680 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4787MB free_disk=72.15759658813477GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.682 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.682 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.793 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.794 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance ef18b98f-df89-44d0-9215-5c2e556e10be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.795 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 99b450eb-11ab-433d-9cf3-da58ea311e94 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.796 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.796 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.909 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.925 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.928 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:40:38 compute-0 nova_compute[189508]: 2025-12-01 22:40:38.929 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:40:39 compute-0 nova_compute[189508]: 2025-12-01 22:40:39.829 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:39 compute-0 nova_compute[189508]: 2025-12-01 22:40:39.930 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:40:39 compute-0 nova_compute[189508]: 2025-12-01 22:40:39.931 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:40:40 compute-0 podman[243619]: 2025-12-01 22:40:40.844558347 +0000 UTC m=+0.108180516 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 22:40:40 compute-0 podman[243622]: 2025-12-01 22:40:40.873035667 +0000 UTC m=+0.116368315 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1214.1726694543, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=)
Dec  1 22:40:40 compute-0 podman[243620]: 2025-12-01 22:40:40.872991355 +0000 UTC m=+0.124488161 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:40:40 compute-0 podman[243621]: 2025-12-01 22:40:40.906003586 +0000 UTC m=+0.150250438 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_id=edpm, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, name=ubi9-minimal, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 22:40:41 compute-0 nova_compute[189508]: 2025-12-01 22:40:41.371 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:42 compute-0 nova_compute[189508]: 2025-12-01 22:40:42.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:40:43 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:43.573 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:40:43 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:43.574 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 22:40:43 compute-0 nova_compute[189508]: 2025-12-01 22:40:43.576 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:44 compute-0 nova_compute[189508]: 2025-12-01 22:40:44.833 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:46 compute-0 nova_compute[189508]: 2025-12-01 22:40:46.373 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:48 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:48.579 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:40:48 compute-0 nova_compute[189508]: 2025-12-01 22:40:48.972 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "dae82663-6de4-4397-8aab-9559ddeaec24" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:40:48 compute-0 nova_compute[189508]: 2025-12-01 22:40:48.974 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "dae82663-6de4-4397-8aab-9559ddeaec24" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:40:48 compute-0 nova_compute[189508]: 2025-12-01 22:40:48.994 189512 DEBUG nova.compute.manager [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.077 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.077 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.089 189512 DEBUG nova.virt.hardware [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.090 189512 INFO nova.compute.claims [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.294 189512 DEBUG nova.compute.provider_tree [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.310 189512 DEBUG nova.scheduler.client.report [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.330 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.332 189512 DEBUG nova.compute.manager [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.374 189512 DEBUG nova.compute.manager [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.375 189512 DEBUG nova.network.neutron [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.398 189512 INFO nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.435 189512 DEBUG nova.compute.manager [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.551 189512 DEBUG nova.compute.manager [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.554 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.555 189512 INFO nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Creating image(s)#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.555 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "/var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.556 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.557 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.575 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.674 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.676 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "9c3ca1997acb58c7aa0cee513cca827b62b8612e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.677 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "9c3ca1997acb58c7aa0cee513cca827b62b8612e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.700 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.789 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.791 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e,backing_fmt=raw /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.836 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.850 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e,backing_fmt=raw /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk 1073741824" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.851 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "9c3ca1997acb58c7aa0cee513cca827b62b8612e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.853 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.920 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.922 189512 DEBUG nova.virt.disk.api [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Checking if we can resize image /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 22:40:49 compute-0 nova_compute[189508]: 2025-12-01 22:40:49.924 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.024 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.027 189512 DEBUG nova.virt.disk.api [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Cannot resize image /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.030 189512 DEBUG nova.objects.instance [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lazy-loading 'migration_context' on Instance uuid dae82663-6de4-4397-8aab-9559ddeaec24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.052 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "/var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.053 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.055 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.088 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.182 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.185 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.186 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.202 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.287 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.289 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.359 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 1073741824" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.362 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.364 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.459 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.463 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.464 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Ensure instance console log exists: /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.465 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.467 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:40:50 compute-0 nova_compute[189508]: 2025-12-01 22:40:50.468 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:40:51 compute-0 nova_compute[189508]: 2025-12-01 22:40:51.377 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:52 compute-0 nova_compute[189508]: 2025-12-01 22:40:52.354 189512 DEBUG nova.network.neutron [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Successfully updated port: d4f1e6ff-9498-4994-811a-29c1f1b406a3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 22:40:52 compute-0 nova_compute[189508]: 2025-12-01 22:40:52.371 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:40:52 compute-0 nova_compute[189508]: 2025-12-01 22:40:52.371 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquired lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:40:52 compute-0 nova_compute[189508]: 2025-12-01 22:40:52.372 189512 DEBUG nova.network.neutron [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 22:40:52 compute-0 nova_compute[189508]: 2025-12-01 22:40:52.448 189512 DEBUG nova.compute.manager [req-c9030fba-7e3f-4b0d-9797-41c7b9b1fc7b req-9614dbe7-524b-4425-85e9-ac09684eb780 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Received event network-changed-d4f1e6ff-9498-4994-811a-29c1f1b406a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:40:52 compute-0 nova_compute[189508]: 2025-12-01 22:40:52.449 189512 DEBUG nova.compute.manager [req-c9030fba-7e3f-4b0d-9797-41c7b9b1fc7b req-9614dbe7-524b-4425-85e9-ac09684eb780 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Refreshing instance network info cache due to event network-changed-d4f1e6ff-9498-4994-811a-29c1f1b406a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:40:52 compute-0 nova_compute[189508]: 2025-12-01 22:40:52.450 189512 DEBUG oslo_concurrency.lockutils [req-c9030fba-7e3f-4b0d-9797-41c7b9b1fc7b req-9614dbe7-524b-4425-85e9-ac09684eb780 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:40:53 compute-0 nova_compute[189508]: 2025-12-01 22:40:53.132 189512 DEBUG nova.network.neutron [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 22:40:54 compute-0 nova_compute[189508]: 2025-12-01 22:40:54.841 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.151 189512 DEBUG nova.network.neutron [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Updating instance_info_cache with network_info: [{"id": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "address": "fa:16:3e:a3:f6:49", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4f1e6ff-94", "ovs_interfaceid": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.184 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Releasing lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.186 189512 DEBUG nova.compute.manager [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Instance network_info: |[{"id": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "address": "fa:16:3e:a3:f6:49", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4f1e6ff-94", "ovs_interfaceid": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.187 189512 DEBUG oslo_concurrency.lockutils [req-c9030fba-7e3f-4b0d-9797-41c7b9b1fc7b req-9614dbe7-524b-4425-85e9-ac09684eb780 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.189 189512 DEBUG nova.network.neutron [req-c9030fba-7e3f-4b0d-9797-41c7b9b1fc7b req-9614dbe7-524b-4425-85e9-ac09684eb780 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Refreshing network info cache for port d4f1e6ff-9498-4994-811a-29c1f1b406a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.195 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Start _get_guest_xml network_info=[{"id": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "address": "fa:16:3e:a3:f6:49", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4f1e6ff-94", "ovs_interfaceid": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T22:30:45Z,direct_url=<?>,disk_format='qcow2',id=ca09b2c0-a624-4fb0-b624-b8d92d761f4a,min_disk=0,min_ram=0,name='cirros',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T22:30:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}], 'ephemerals': [{'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'size': 1, 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.210 189512 WARNING nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.225 189512 DEBUG nova.virt.libvirt.host [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.226 189512 DEBUG nova.virt.libvirt.host [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.235 189512 DEBUG nova.virt.libvirt.host [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.237 189512 DEBUG nova.virt.libvirt.host [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.239 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.240 189512 DEBUG nova.virt.hardware [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T22:30:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='aa9783c0-34c0-4a4d-bc86-59429edc9395',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T22:30:45Z,direct_url=<?>,disk_format='qcow2',id=ca09b2c0-a624-4fb0-b624-b8d92d761f4a,min_disk=0,min_ram=0,name='cirros',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T22:30:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.242 189512 DEBUG nova.virt.hardware [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.243 189512 DEBUG nova.virt.hardware [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.244 189512 DEBUG nova.virt.hardware [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.245 189512 DEBUG nova.virt.hardware [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.246 189512 DEBUG nova.virt.hardware [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.247 189512 DEBUG nova.virt.hardware [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.248 189512 DEBUG nova.virt.hardware [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.249 189512 DEBUG nova.virt.hardware [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.250 189512 DEBUG nova.virt.hardware [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.252 189512 DEBUG nova.virt.hardware [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.258 189512 DEBUG nova.virt.libvirt.vif [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:40:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u',id=4,image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='40d7879f-33f5-4fcb-8784-d9088730e18f'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af2fbf0e1b5f40c19aed69d241db7727',ramdisk_id='',reservation_id='r-qucg0bnj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:40:49Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNTMzMjU4OTYzMTAzNjE2MTU4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA1MzMyNTg5NjMxMDM2MTYxNTg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDUzMzI1ODk2MzEwMzYxNjE1OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA1MzMyNTg5NjMxMDM2MTYxNTg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNTMzMjU4OTYzMTAzNjE2MTU4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNTMzMjU4OTYzMTAzNjE2MTU4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  1 22:40:55 compute-0 nova_compute[189508]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDUzMzI1ODk2MzEwMzYxNjE1OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA1MzMyNTg5NjMxMDM2MTYxNTg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNTMzMjU4OTYzMTAzNjE2MTU4PT0tLQo=',user_id='3b810e864d6c4d058e539f62ad181096',uuid=dae82663-6de4-4397-8aab-9559ddeaec24,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "address": "fa:16:3e:a3:f6:49", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4f1e6ff-94", "ovs_interfaceid": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.259 189512 DEBUG nova.network.os_vif_util [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converting VIF {"id": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "address": "fa:16:3e:a3:f6:49", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4f1e6ff-94", "ovs_interfaceid": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.260 189512 DEBUG nova.network.os_vif_util [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:f6:49,bridge_name='br-int',has_traffic_filtering=True,id=d4f1e6ff-9498-4994-811a-29c1f1b406a3,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd4f1e6ff-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.263 189512 DEBUG nova.objects.instance [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lazy-loading 'pci_devices' on Instance uuid dae82663-6de4-4397-8aab-9559ddeaec24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.285 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] End _get_guest_xml xml=<domain type="kvm">
Dec  1 22:40:55 compute-0 nova_compute[189508]:  <uuid>dae82663-6de4-4397-8aab-9559ddeaec24</uuid>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  <name>instance-00000004</name>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  <memory>524288</memory>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  <vcpu>1</vcpu>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  <metadata>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <nova:name>vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u</nova:name>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <nova:creationTime>2025-12-01 22:40:55</nova:creationTime>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <nova:flavor name="m1.small">
Dec  1 22:40:55 compute-0 nova_compute[189508]:        <nova:memory>512</nova:memory>
Dec  1 22:40:55 compute-0 nova_compute[189508]:        <nova:disk>1</nova:disk>
Dec  1 22:40:55 compute-0 nova_compute[189508]:        <nova:swap>0</nova:swap>
Dec  1 22:40:55 compute-0 nova_compute[189508]:        <nova:ephemeral>1</nova:ephemeral>
Dec  1 22:40:55 compute-0 nova_compute[189508]:        <nova:vcpus>1</nova:vcpus>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      </nova:flavor>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <nova:owner>
Dec  1 22:40:55 compute-0 nova_compute[189508]:        <nova:user uuid="3b810e864d6c4d058e539f62ad181096">admin</nova:user>
Dec  1 22:40:55 compute-0 nova_compute[189508]:        <nova:project uuid="af2fbf0e1b5f40c19aed69d241db7727">admin</nova:project>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      </nova:owner>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <nova:root type="image" uuid="ca09b2c0-a624-4fb0-b624-b8d92d761f4a"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <nova:ports>
Dec  1 22:40:55 compute-0 nova_compute[189508]:        <nova:port uuid="d4f1e6ff-9498-4994-811a-29c1f1b406a3">
Dec  1 22:40:55 compute-0 nova_compute[189508]:          <nova:ip type="fixed" address="192.168.0.51" ipVersion="4"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:        </nova:port>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      </nova:ports>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    </nova:instance>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  </metadata>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  <sysinfo type="smbios">
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <system>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <entry name="manufacturer">RDO</entry>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <entry name="product">OpenStack Compute</entry>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <entry name="serial">dae82663-6de4-4397-8aab-9559ddeaec24</entry>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <entry name="uuid">dae82663-6de4-4397-8aab-9559ddeaec24</entry>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <entry name="family">Virtual Machine</entry>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    </system>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  </sysinfo>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  <os>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <boot dev="hd"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <smbios mode="sysinfo"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  </os>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  <features>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <acpi/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <apic/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <vmcoreinfo/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  </features>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  <clock offset="utc">
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <timer name="hpet" present="no"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  </clock>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  <cpu mode="host-model" match="exact">
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  </cpu>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  <devices>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <target dev="vda" bus="virtio"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <target dev="vdb" bus="virtio"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <disk type="file" device="cdrom">
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.config"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <target dev="sda" bus="sata"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <interface type="ethernet">
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <mac address="fa:16:3e:a3:f6:49"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <mtu size="1442"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <target dev="tapd4f1e6ff-94"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    </interface>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <serial type="pty">
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <log file="/var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/console.log" append="off"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    </serial>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <video>
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    </video>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <input type="tablet" bus="usb"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <rng model="virtio">
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <backend model="random">/dev/urandom</backend>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    </rng>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <controller type="usb" index="0"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    <memballoon model="virtio">
Dec  1 22:40:55 compute-0 nova_compute[189508]:      <stats period="10"/>
Dec  1 22:40:55 compute-0 nova_compute[189508]:    </memballoon>
Dec  1 22:40:55 compute-0 nova_compute[189508]:  </devices>
Dec  1 22:40:55 compute-0 nova_compute[189508]: </domain>
Dec  1 22:40:55 compute-0 nova_compute[189508]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.287 189512 DEBUG nova.compute.manager [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Preparing to wait for external event network-vif-plugged-d4f1e6ff-9498-4994-811a-29c1f1b406a3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.287 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.288 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.288 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.289 189512 DEBUG nova.virt.libvirt.vif [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:40:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u',id=4,image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='40d7879f-33f5-4fcb-8784-d9088730e18f'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af2fbf0e1b5f40c19aed69d241db7727',ramdisk_id='',reservation_id='r-qucg0bnj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:40:49Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNTMzMjU4OTYzMTAzNjE2MTU4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA1MzMyNTg5NjMxMDM2MTYxNTg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDUzMzI1ODk2MzEwMzYxNjE1OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA1MzMyNTg5NjMxMDM2MTYxNTg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNTMzMjU4OTYzMTAzNjE2MTU4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNTMzMjU4OTYzMTAzNjE2MTU4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  1 22:40:55 compute-0 nova_compute[189508]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDUzMzI1ODk2MzEwMzYxNjE1OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA1MzMyNTg5NjMxMDM2MTYxNTg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNTMzMjU4OTYzMTAzNjE2MTU4PT0tLQo=',user_id='3b810e864d6c4d058e539f62ad181096',uuid=dae82663-6de4-4397-8aab-9559ddeaec24,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "address": "fa:16:3e:a3:f6:49", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4f1e6ff-94", "ovs_interfaceid": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.290 189512 DEBUG nova.network.os_vif_util [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converting VIF {"id": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "address": "fa:16:3e:a3:f6:49", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4f1e6ff-94", "ovs_interfaceid": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.291 189512 DEBUG nova.network.os_vif_util [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:f6:49,bridge_name='br-int',has_traffic_filtering=True,id=d4f1e6ff-9498-4994-811a-29c1f1b406a3,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd4f1e6ff-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.292 189512 DEBUG os_vif [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:f6:49,bridge_name='br-int',has_traffic_filtering=True,id=d4f1e6ff-9498-4994-811a-29c1f1b406a3,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd4f1e6ff-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.292 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.293 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.294 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.300 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.301 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4f1e6ff-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.303 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4f1e6ff-94, col_values=(('external_ids', {'iface-id': 'd4f1e6ff-9498-4994-811a-29c1f1b406a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:f6:49', 'vm-uuid': 'dae82663-6de4-4397-8aab-9559ddeaec24'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:40:55 compute-0 NetworkManager[56278]: <info>  [1764628855.3079] manager: (tapd4f1e6ff-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.306 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.310 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.322 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.325 189512 INFO os_vif [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:f6:49,bridge_name='br-int',has_traffic_filtering=True,id=d4f1e6ff-9498-4994-811a-29c1f1b406a3,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd4f1e6ff-94')#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.403 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.404 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.404 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.405 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No VIF found with MAC fa:16:3e:a3:f6:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 22:40:55 compute-0 nova_compute[189508]: 2025-12-01 22:40:55.405 189512 INFO nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Using config drive#033[00m
Dec  1 22:40:55 compute-0 rsyslogd[236992]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 22:40:55.258 189512 DEBUG nova.virt.libvirt.vif [None req-49f4896d-c1 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 22:40:55 compute-0 rsyslogd[236992]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 22:40:55.289 189512 DEBUG nova.virt.libvirt.vif [None req-49f4896d-c1 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 22:40:56 compute-0 nova_compute[189508]: 2025-12-01 22:40:56.138 189512 INFO nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Creating config drive at /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.config#033[00m
Dec  1 22:40:56 compute-0 nova_compute[189508]: 2025-12-01 22:40:56.150 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8vxb2awt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:40:56 compute-0 nova_compute[189508]: 2025-12-01 22:40:56.306 189512 DEBUG oslo_concurrency.processutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8vxb2awt" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:40:56 compute-0 kernel: tapd4f1e6ff-94: entered promiscuous mode
Dec  1 22:40:56 compute-0 NetworkManager[56278]: <info>  [1764628856.4407] manager: (tapd4f1e6ff-94): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec  1 22:40:56 compute-0 ovn_controller[97770]: 2025-12-01T22:40:56Z|00045|binding|INFO|Claiming lport d4f1e6ff-9498-4994-811a-29c1f1b406a3 for this chassis.
Dec  1 22:40:56 compute-0 nova_compute[189508]: 2025-12-01 22:40:56.448 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:56 compute-0 ovn_controller[97770]: 2025-12-01T22:40:56Z|00046|binding|INFO|d4f1e6ff-9498-4994-811a-29c1f1b406a3: Claiming fa:16:3e:a3:f6:49 192.168.0.51
Dec  1 22:40:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:56.465 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:f6:49 192.168.0.51'], port_security=['fa:16:3e:a3:f6:49 192.168.0.51'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-37pfkxggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-port-gnvnsxaqfbgg', 'neutron:cidrs': '192.168.0.51/24', 'neutron:device_id': 'dae82663-6de4-4397-8aab-9559ddeaec24', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-37pfkxggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-port-gnvnsxaqfbgg', 'neutron:project_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a56d0f98-60b7-42d6-a9fa-4c77301b81c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8157a1f-e2f4-4050-ab6e-a95d2880ddbb, chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=d4f1e6ff-9498-4994-811a-29c1f1b406a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:40:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:56.467 106662 INFO neutron.agent.ovn.metadata.agent [-] Port d4f1e6ff-9498-4994-811a-29c1f1b406a3 in datapath dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c bound to our chassis#033[00m
Dec  1 22:40:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:56.469 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c#033[00m
Dec  1 22:40:56 compute-0 ovn_controller[97770]: 2025-12-01T22:40:56Z|00047|binding|INFO|Setting lport d4f1e6ff-9498-4994-811a-29c1f1b406a3 ovn-installed in OVS
Dec  1 22:40:56 compute-0 nova_compute[189508]: 2025-12-01 22:40:56.496 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:56 compute-0 ovn_controller[97770]: 2025-12-01T22:40:56Z|00048|binding|INFO|Setting lport d4f1e6ff-9498-4994-811a-29c1f1b406a3 up in Southbound
Dec  1 22:40:56 compute-0 nova_compute[189508]: 2025-12-01 22:40:56.499 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:56.503 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[8a9efeb0-338f-4862-8d93-dfe63b1520b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:40:56 compute-0 systemd-udevd[243751]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:40:56 compute-0 systemd-machined[155759]: New machine qemu-4-instance-00000004.
Dec  1 22:40:56 compute-0 NetworkManager[56278]: <info>  [1764628856.5344] device (tapd4f1e6ff-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 22:40:56 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Dec  1 22:40:56 compute-0 NetworkManager[56278]: <info>  [1764628856.5380] device (tapd4f1e6ff-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 22:40:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:56.573 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[85be87d0-cd11-40ca-acc4-b112324d5070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:40:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:56.578 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[ead1adb6-0c59-498d-bc74-569216565e71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:40:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:56.626 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[3868e4f7-f75d-4fa5-9815-266fe38c0912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:40:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:56.655 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2d1fab-9cdf-4251-ada1-86083b0a0804]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd6e3c27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:b1:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 10, 'rx_bytes': 532, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 10, 'rx_bytes': 532, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384760, 'reachable_time': 30718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243763, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:40:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:56.679 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[8a519b98-3ccd-437e-9633-225b1a3d1f72]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapdd6e3c27-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384779, 'tstamp': 384779}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243765, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdd6e3c27-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384784, 'tstamp': 384784}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243765, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:40:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:56.681 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd6e3c27-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:40:56 compute-0 nova_compute[189508]: 2025-12-01 22:40:56.683 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:56 compute-0 nova_compute[189508]: 2025-12-01 22:40:56.685 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:40:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:56.686 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd6e3c27-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:40:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:56.686 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:40:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:56.686 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdd6e3c27-10, col_values=(('external_ids', {'iface-id': 'e303b09b-4673-4950-aa2d-91085a5bc5f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:40:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:40:56.687 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:40:56 compute-0 nova_compute[189508]: 2025-12-01 22:40:56.763 189512 DEBUG nova.compute.manager [req-6ab05ddc-3b9b-49b8-a7d5-bcb113fc8aeb req-34dd9288-ec44-475b-9e80-cc089a490954 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Received event network-vif-plugged-d4f1e6ff-9498-4994-811a-29c1f1b406a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:40:56 compute-0 nova_compute[189508]: 2025-12-01 22:40:56.764 189512 DEBUG oslo_concurrency.lockutils [req-6ab05ddc-3b9b-49b8-a7d5-bcb113fc8aeb req-34dd9288-ec44-475b-9e80-cc089a490954 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:40:56 compute-0 nova_compute[189508]: 2025-12-01 22:40:56.770 189512 DEBUG oslo_concurrency.lockutils [req-6ab05ddc-3b9b-49b8-a7d5-bcb113fc8aeb req-34dd9288-ec44-475b-9e80-cc089a490954 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:40:56 compute-0 nova_compute[189508]: 2025-12-01 22:40:56.772 189512 DEBUG oslo_concurrency.lockutils [req-6ab05ddc-3b9b-49b8-a7d5-bcb113fc8aeb req-34dd9288-ec44-475b-9e80-cc089a490954 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:40:56 compute-0 nova_compute[189508]: 2025-12-01 22:40:56.773 189512 DEBUG nova.compute.manager [req-6ab05ddc-3b9b-49b8-a7d5-bcb113fc8aeb req-34dd9288-ec44-475b-9e80-cc089a490954 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Processing event network-vif-plugged-d4f1e6ff-9498-4994-811a-29c1f1b406a3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.154 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764628857.1522481, dae82663-6de4-4397-8aab-9559ddeaec24 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.158 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] VM Started (Lifecycle Event)#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.161 189512 DEBUG nova.compute.manager [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.168 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.174 189512 INFO nova.virt.libvirt.driver [-] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Instance spawned successfully.#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.174 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.176 189512 DEBUG nova.network.neutron [req-c9030fba-7e3f-4b0d-9797-41c7b9b1fc7b req-9614dbe7-524b-4425-85e9-ac09684eb780 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Updated VIF entry in instance network info cache for port d4f1e6ff-9498-4994-811a-29c1f1b406a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.177 189512 DEBUG nova.network.neutron [req-c9030fba-7e3f-4b0d-9797-41c7b9b1fc7b req-9614dbe7-524b-4425-85e9-ac09684eb780 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Updating instance_info_cache with network_info: [{"id": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "address": "fa:16:3e:a3:f6:49", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4f1e6ff-94", "ovs_interfaceid": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.206 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.216 189512 DEBUG oslo_concurrency.lockutils [req-c9030fba-7e3f-4b0d-9797-41c7b9b1fc7b req-9614dbe7-524b-4425-85e9-ac09684eb780 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.224 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.232 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.233 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.234 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.236 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.237 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.238 189512 DEBUG nova.virt.libvirt.driver [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.261 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.262 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764628857.1526732, dae82663-6de4-4397-8aab-9559ddeaec24 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.262 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] VM Paused (Lifecycle Event)#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.285 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.291 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764628857.1658964, dae82663-6de4-4397-8aab-9559ddeaec24 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.291 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] VM Resumed (Lifecycle Event)#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.295 189512 INFO nova.compute.manager [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Took 7.74 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.295 189512 DEBUG nova.compute.manager [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.309 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.315 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.344 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.360 189512 INFO nova.compute.manager [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Took 8.32 seconds to build instance.#033[00m
Dec  1 22:40:57 compute-0 nova_compute[189508]: 2025-12-01 22:40:57.376 189512 DEBUG oslo_concurrency.lockutils [None req-49f4896d-c1f9-4edf-b51f-a58bcb96446e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "dae82663-6de4-4397-8aab-9559ddeaec24" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:40:57 compute-0 podman[243773]: 2025-12-01 22:40:57.855154093 +0000 UTC m=+0.128022894 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:40:58 compute-0 nova_compute[189508]: 2025-12-01 22:40:58.850 189512 DEBUG nova.compute.manager [req-0c7124ee-d8a2-4a12-9e66-9c2fac8cab3b req-eb4e5648-519e-479b-a558-0842ed7e0597 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Received event network-vif-plugged-d4f1e6ff-9498-4994-811a-29c1f1b406a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:40:58 compute-0 nova_compute[189508]: 2025-12-01 22:40:58.851 189512 DEBUG oslo_concurrency.lockutils [req-0c7124ee-d8a2-4a12-9e66-9c2fac8cab3b req-eb4e5648-519e-479b-a558-0842ed7e0597 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:40:58 compute-0 nova_compute[189508]: 2025-12-01 22:40:58.851 189512 DEBUG oslo_concurrency.lockutils [req-0c7124ee-d8a2-4a12-9e66-9c2fac8cab3b req-eb4e5648-519e-479b-a558-0842ed7e0597 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:40:58 compute-0 nova_compute[189508]: 2025-12-01 22:40:58.851 189512 DEBUG oslo_concurrency.lockutils [req-0c7124ee-d8a2-4a12-9e66-9c2fac8cab3b req-eb4e5648-519e-479b-a558-0842ed7e0597 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:40:58 compute-0 nova_compute[189508]: 2025-12-01 22:40:58.852 189512 DEBUG nova.compute.manager [req-0c7124ee-d8a2-4a12-9e66-9c2fac8cab3b req-eb4e5648-519e-479b-a558-0842ed7e0597 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] No waiting events found dispatching network-vif-plugged-d4f1e6ff-9498-4994-811a-29c1f1b406a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:40:58 compute-0 nova_compute[189508]: 2025-12-01 22:40:58.852 189512 WARNING nova.compute.manager [req-0c7124ee-d8a2-4a12-9e66-9c2fac8cab3b req-eb4e5648-519e-479b-a558-0842ed7e0597 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Received unexpected event network-vif-plugged-d4f1e6ff-9498-4994-811a-29c1f1b406a3 for instance with vm_state active and task_state None.#033[00m
Dec  1 22:40:59 compute-0 podman[203693]: time="2025-12-01T22:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:40:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:40:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4779 "" "Go-http-client/1.1"
Dec  1 22:40:59 compute-0 nova_compute[189508]: 2025-12-01 22:40:59.847 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:00 compute-0 nova_compute[189508]: 2025-12-01 22:41:00.308 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:00 compute-0 podman[243797]: 2025-12-01 22:41:00.851987065 +0000 UTC m=+0.120128235 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:41:01 compute-0 openstack_network_exporter[205887]: ERROR   22:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:41:01 compute-0 openstack_network_exporter[205887]: ERROR   22:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:41:01 compute-0 openstack_network_exporter[205887]: ERROR   22:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:41:01 compute-0 openstack_network_exporter[205887]: ERROR   22:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:41:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:41:01 compute-0 openstack_network_exporter[205887]: ERROR   22:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:41:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:41:02 compute-0 podman[243817]: 2025-12-01 22:41:02.880040859 +0000 UTC m=+0.151817880 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:41:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:41:04.616 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:41:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:41:04.617 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:41:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:41:04.618 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:41:04 compute-0 nova_compute[189508]: 2025-12-01 22:41:04.851 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:05 compute-0 nova_compute[189508]: 2025-12-01 22:41:05.313 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:06 compute-0 podman[243839]: 2025-12-01 22:41:06.843627151 +0000 UTC m=+0.112170822 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec  1 22:41:06 compute-0 podman[243838]: 2025-12-01 22:41:06.902538342 +0000 UTC m=+0.184027918 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 22:41:09 compute-0 nova_compute[189508]: 2025-12-01 22:41:09.853 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:10 compute-0 nova_compute[189508]: 2025-12-01 22:41:10.317 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:11 compute-0 podman[243883]: 2025-12-01 22:41:11.837222153 +0000 UTC m=+0.088541212 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.expose-services=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-type=git, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41)
Dec  1 22:41:11 compute-0 podman[243881]: 2025-12-01 22:41:11.844781775 +0000 UTC m=+0.114328250 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 22:41:11 compute-0 podman[243882]: 2025-12-01 22:41:11.852431889 +0000 UTC m=+0.106153392 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 22:41:11 compute-0 podman[243889]: 2025-12-01 22:41:11.857887714 +0000 UTC m=+0.106616754 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, distribution-scope=public, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=edpm, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec  1 22:41:14 compute-0 nova_compute[189508]: 2025-12-01 22:41:14.857 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:15 compute-0 nova_compute[189508]: 2025-12-01 22:41:15.321 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:19 compute-0 nova_compute[189508]: 2025-12-01 22:41:19.860 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:20 compute-0 nova_compute[189508]: 2025-12-01 22:41:20.324 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:24 compute-0 nova_compute[189508]: 2025-12-01 22:41:24.864 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:25 compute-0 nova_compute[189508]: 2025-12-01 22:41:25.327 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:26 compute-0 ovn_controller[97770]: 2025-12-01T22:41:26Z|00049|memory_trim|INFO|Detected inactivity (last active 30020 ms ago): trimming memory
Dec  1 22:41:28 compute-0 podman[243961]: 2025-12-01 22:41:28.836336241 +0000 UTC m=+0.097351249 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:41:29 compute-0 podman[203693]: time="2025-12-01T22:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:41:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:41:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
Dec  1 22:41:29 compute-0 nova_compute[189508]: 2025-12-01 22:41:29.868 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:30 compute-0 ovn_controller[97770]: 2025-12-01T22:41:30Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a3:f6:49 192.168.0.51
Dec  1 22:41:30 compute-0 ovn_controller[97770]: 2025-12-01T22:41:30Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a3:f6:49 192.168.0.51
Dec  1 22:41:30 compute-0 nova_compute[189508]: 2025-12-01 22:41:30.330 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:31 compute-0 openstack_network_exporter[205887]: ERROR   22:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:41:31 compute-0 openstack_network_exporter[205887]: ERROR   22:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:41:31 compute-0 openstack_network_exporter[205887]: ERROR   22:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:41:31 compute-0 openstack_network_exporter[205887]: ERROR   22:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:41:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:41:31 compute-0 openstack_network_exporter[205887]: ERROR   22:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:41:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:41:31 compute-0 podman[243992]: 2025-12-01 22:41:31.811165298 +0000 UTC m=+0.086382467 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  1 22:41:33 compute-0 podman[244012]: 2025-12-01 22:41:33.86176109 +0000 UTC m=+0.127100694 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, tcib_managed=true)
Dec  1 22:41:34 compute-0 nova_compute[189508]: 2025-12-01 22:41:34.193 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 22:41:34 compute-0 nova_compute[189508]: 2025-12-01 22:41:34.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 22:41:34 compute-0 nova_compute[189508]: 2025-12-01 22:41:34.872 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.267 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.267 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.269 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b25be0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.281 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'db72b066-1974-41bb-a917-13b5ba129196', 'name': 'test_0', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.285 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance dae82663-6de4-4397-8aab-9559ddeaec24 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 22:41:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:35.287 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/dae82663-6de4-4397-8aab-9559ddeaec24 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82f68aee2d35afc7725a847ea4300457258faf9d3b47fbdf3a1dc69f53294b24" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 22:41:35 compute-0 nova_compute[189508]: 2025-12-01 22:41:35.334 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:41:36 compute-0 nova_compute[189508]: 2025-12-01 22:41:36.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 22:41:36 compute-0 nova_compute[189508]: 2025-12-01 22:41:36.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.350 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Mon, 01 Dec 2025 22:41:35 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-cacd4075-fe59-4352-b783-522964bd0a45 x-openstack-request-id: req-cacd4075-fe59-4352-b783-522964bd0a45 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.351 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "dae82663-6de4-4397-8aab-9559ddeaec24", "name": "vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u", "status": "ACTIVE", "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "user_id": "3b810e864d6c4d058e539f62ad181096", "metadata": {"metering.server_group": "40d7879f-33f5-4fcb-8784-d9088730e18f"}, "hostId": "968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d", "image": {"id": "ca09b2c0-a624-4fb0-b624-b8d92d761f4a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ca09b2c0-a624-4fb0-b624-b8d92d761f4a"}]}, "flavor": {"id": "aa9783c0-34c0-4a4d-bc86-59429edc9395", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/aa9783c0-34c0-4a4d-bc86-59429edc9395"}]}, "created": "2025-12-01T22:40:47Z", "updated": "2025-12-01T22:40:57Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.51", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a3:f6:49"}, {"version": 4, "addr": "192.168.122.183", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a3:f6:49"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/dae82663-6de4-4397-8aab-9559ddeaec24"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/dae82663-6de4-4397-8aab-9559ddeaec24"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T22:40:57.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.352 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/dae82663-6de4-4397-8aab-9559ddeaec24 used request id req-cacd4075-fe59-4352-b783-522964bd0a45 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.354 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dae82663-6de4-4397-8aab-9559ddeaec24', 'name': 'vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {'metering.server_group': '40d7879f-33f5-4fcb-8784-d9088730e18f'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.361 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '99b450eb-11ab-433d-9cf3-da58ea311e94', 'name': 'vn-xggku2d-wifaxhcghats-izgcjuxscyy2-vnf-fyan4lptzpzi', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {'metering.server_group': '40d7879f-33f5-4fcb-8784-d9088730e18f'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.367 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ef18b98f-df89-44d0-9215-5c2e556e10be', 'name': 'vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {'metering.server_group': '40d7879f-33f5-4fcb-8784-d9088730e18f'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.368 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.369 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.369 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.369 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T22:41:36.369692) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.376 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.383 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for dae82663-6de4-4397-8aab-9559ddeaec24 / tapd4f1e6ff-94 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.383 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.391 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.400 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.packets volume: 59 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.401 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.401 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.401 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.402 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.402 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.402 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.403 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.403 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.403 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T22:41:36.402822) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.404 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.404 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.405 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.406 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.406 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.407 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.407 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T22:41:36.407264) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.408 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.408 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.409 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.410 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.410 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.411 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T22:41:36.410481) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.454 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.455 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.455 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.502 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.503 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.503 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.543 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.544 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.544 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.588 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.589 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.590 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.591 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.591 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.591 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.592 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.592 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T22:41:36.591897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.714 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.715 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.716 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.830 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.831 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.831 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.961 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.962 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:36.962 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.089 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.090 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.092 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.093 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.093 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.094 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.094 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.094 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.094 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 484161753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.095 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 126486600 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.095 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 84264950 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.095 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.latency volume: 529113669 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.096 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.latency volume: 125664984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.097 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T22:41:37.094344) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.097 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.latency volume: 99600138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.097 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.latency volume: 518522445 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.098 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.latency volume: 95166420 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.098 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.latency volume: 71008121 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.099 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.latency volume: 493804988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.099 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.latency volume: 100192430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.100 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.latency volume: 68791964 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.101 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.101 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.102 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.102 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.102 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.102 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.102 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.103 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.103 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.104 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.104 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.105 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T22:41:37.102351) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.105 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.106 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.106 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.107 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.107 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.107 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.108 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.109 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.109 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.109 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.109 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.110 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.110 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.110 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.111 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.111 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.112 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.112 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.113 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.113 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.114 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T22:41:37.109848) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.114 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.115 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.116 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.116 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.117 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.117 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.117 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.117 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.117 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.117 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.118 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.118 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.118 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.119 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.119 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.119 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.119 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.120 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.120 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T22:41:37.117322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.121 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.122 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.122 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.122 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.122 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.122 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.123 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.123 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.123 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T22:41:37.122800) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.123 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.bytes volume: 41590784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.124 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.124 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.124 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.124 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.125 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.125 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.125 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.126 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.126 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.127 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.127 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.127 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 2925316221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.127 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 17009348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.127 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.128 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T22:41:37.127194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.128 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.latency volume: 1928962619 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.128 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.latency volume: 13544625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.128 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.129 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.latency volume: 1768561782 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.129 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.latency volume: 11037405 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.129 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.130 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.latency volume: 2018654658 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.130 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.latency volume: 11549778 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.130 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.131 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.131 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.131 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.131 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.131 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.132 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T22:41:37.131795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.174 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/cpu volume: 39640000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 nova_compute[189508]: 2025-12-01 22:41:37.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:41:37 compute-0 nova_compute[189508]: 2025-12-01 22:41:37.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.216 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/cpu volume: 32800000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.257 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/cpu volume: 34180000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.301 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/cpu volume: 386970000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.302 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.303 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.304 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.304 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.304 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.305 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.305 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T22:41:37.302958) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.306 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.306 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.306 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.307 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.308 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.308 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T22:41:37.306976) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.308 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.309 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.bytes.delta volume: 182 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.309 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.310 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.310 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.310 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.312 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.313 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T22:41:37.312038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.313 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.314 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.315 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.316 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.requests volume: 214 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.316 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.318 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.318 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.319 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.319 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.320 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.requests volume: 241 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.320 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.321 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.323 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.324 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.325 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.325 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.325 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.325 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u>]
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.326 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.326 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.326 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.327 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.329 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.330 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T22:41:37.325123) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.331 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T22:41:37.327240) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.331 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T22:41:37.331657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.332 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.333 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.334 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.334 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.packets volume: 53 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.335 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.335 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.336 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.337 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.337 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.338 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T22:41:37.336153) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.338 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T22:41:37.338436) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.339 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.339 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.340 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.341 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.341 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.341 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.341 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.341 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.342 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.342 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.343 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.343 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T22:41:37.341465) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.344 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.344 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.344 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T22:41:37.344158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.344 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.345 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.bytes volume: 1666 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.345 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.346 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.bytes volume: 7130 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.346 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.347 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.347 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.347 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.347 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.348 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.348 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.bytes.delta volume: 1143 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.348 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.bytes.delta volume: 2474 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.349 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.349 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.349 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.350 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.350 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.350 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/memory.usage volume: 48.78515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.350 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T22:41:37.347528) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.350 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/memory.usage volume: 49.578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.350 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T22:41:37.350092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.351 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.351 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/memory.usage volume: 49.0078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.351 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.352 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.352 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.352 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.352 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.352 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.352 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u>]
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.353 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.353 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.353 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.353 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T22:41:37.352266) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.353 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.353 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.354 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T22:41:37.353718) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.354 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.354 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.355 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.bytes volume: 8322 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.355 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:41:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:41:37.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
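The burst above is one ceilometer polling cycle: each meter ends with a "Finished processing pollster [name]" line from execute_polling_task_processing. A minimal sketch (the regex and helper name are illustrative, not part of ceilometer) for extracting which pollsters completed in a cycle from journal lines like these:

```python
import re

# Matches the completion message emitted by ceilometer.polling.manager, e.g.
# "Finished processing pollster [network.outgoing.packets]."
FINISHED = re.compile(r"Finished processing pollster \[([\w.]+)\]")

def finished_pollsters(lines):
    """Return pollster names whose processing completed, in log order."""
    names = []
    for line in lines:
        m = FINISHED.search(line)
        if m:
            names.append(m.group(1))
    return names
```

Comparing the returned list against the configured polling.yaml meters is one quick way to spot a pollster that started but never finished.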
Dec  1 22:41:37 compute-0 nova_compute[189508]: 2025-12-01 22:41:37.459 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:41:37 compute-0 nova_compute[189508]: 2025-12-01 22:41:37.459 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:41:37 compute-0 nova_compute[189508]: 2025-12-01 22:41:37.460 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:41:37 compute-0 podman[244034]: 2025-12-01 22:41:37.844899121 +0000 UTC m=+0.106807488 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  1 22:41:37 compute-0 podman[244033]: 2025-12-01 22:41:37.929840285 +0000 UTC m=+0.195069836 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 22:41:39 compute-0 nova_compute[189508]: 2025-12-01 22:41:39.640 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Updating instance_info_cache with network_info: [{"id": "112b3e51-47c2-499f-9108-af9d45576c1e", "address": "fa:16:3e:96:04:8b", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap112b3e51-47", "ovs_interfaceid": "112b3e51-47c2-499f-9108-af9d45576c1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:41:39 compute-0 nova_compute[189508]: 2025-12-01 22:41:39.663 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:41:39 compute-0 nova_compute[189508]: 2025-12-01 22:41:39.664 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:41:39 compute-0 nova_compute[189508]: 2025-12-01 22:41:39.665 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:41:39 compute-0 nova_compute[189508]: 2025-12-01 22:41:39.666 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:41:39 compute-0 nova_compute[189508]: 2025-12-01 22:41:39.700 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:41:39 compute-0 nova_compute[189508]: 2025-12-01 22:41:39.702 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:41:39 compute-0 nova_compute[189508]: 2025-12-01 22:41:39.702 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
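The three nova lines above show the oslo_concurrency.lockutils pattern: acquire the named "compute_resources" lock, report how long the caller waited, then report how long the lock was held on release. A minimal stdlib sketch of the same logging pattern (this is an illustration with a hypothetical LoggedLock class, not the oslo_concurrency implementation):

```python
import threading
import time

class LoggedLock:
    """Toy named lock that reports waited/held times like oslo_concurrency.lockutils."""

    def __init__(self, name):
        self.name = name
        self._lock = threading.Lock()

    def __enter__(self):
        start = time.monotonic()
        self._lock.acquire()
        # Time spent blocked waiting for other holders.
        self.waited = time.monotonic() - start
        self._acquired_at = time.monotonic()
        print(f'Lock "{self.name}" acquired :: waited {self.waited:.3f}s')
        return self

    def __exit__(self, *exc):
        self.held = time.monotonic() - self._acquired_at
        self._lock.release()
        print(f'Lock "{self.name}" released :: held {self.held:.3f}s')
        return False  # never suppress exceptions
```

In the log, a long "waited" indicates contention on the resource tracker; a long "held" indicates slow work inside the critical section.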
Dec  1 22:41:39 compute-0 nova_compute[189508]: 2025-12-01 22:41:39.703 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:41:39 compute-0 nova_compute[189508]: 2025-12-01 22:41:39.864 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:41:39 compute-0 nova_compute[189508]: 2025-12-01 22:41:39.897 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:39 compute-0 nova_compute[189508]: 2025-12-01 22:41:39.989 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:41:39 compute-0 nova_compute[189508]: 2025-12-01 22:41:39.991 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.092 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.094 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.199 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.201 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.291 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.305 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.338 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.402 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.403 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.497 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.498 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.591 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.593 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.687 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.699 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.799 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.802 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.892 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.896 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.961 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:41:40 compute-0 nova_compute[189508]: 2025-12-01 22:41:40.964 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:41:41 compute-0 nova_compute[189508]: 2025-12-01 22:41:41.062 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:41:41 compute-0 nova_compute[189508]: 2025-12-01 22:41:41.077 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:41:41 compute-0 nova_compute[189508]: 2025-12-01 22:41:41.166 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:41:41 compute-0 nova_compute[189508]: 2025-12-01 22:41:41.168 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:41:41 compute-0 nova_compute[189508]: 2025-12-01 22:41:41.246 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:41:41 compute-0 nova_compute[189508]: 2025-12-01 22:41:41.248 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:41:41 compute-0 nova_compute[189508]: 2025-12-01 22:41:41.314 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:41:41 compute-0 nova_compute[189508]: 2025-12-01 22:41:41.320 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:41:41 compute-0 nova_compute[189508]: 2025-12-01 22:41:41.394 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:41:41 compute-0 nova_compute[189508]: 2025-12-01 22:41:41.940 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:41:41 compute-0 nova_compute[189508]: 2025-12-01 22:41:41.942 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4610MB free_disk=72.13367080688477GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:41:41 compute-0 nova_compute[189508]: 2025-12-01 22:41:41.943 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:41:41 compute-0 nova_compute[189508]: 2025-12-01 22:41:41.943 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:41:42 compute-0 nova_compute[189508]: 2025-12-01 22:41:42.079 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:41:42 compute-0 nova_compute[189508]: 2025-12-01 22:41:42.080 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance ef18b98f-df89-44d0-9215-5c2e556e10be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:41:42 compute-0 nova_compute[189508]: 2025-12-01 22:41:42.080 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 99b450eb-11ab-433d-9cf3-da58ea311e94 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:41:42 compute-0 nova_compute[189508]: 2025-12-01 22:41:42.080 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance dae82663-6de4-4397-8aab-9559ddeaec24 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:41:42 compute-0 nova_compute[189508]: 2025-12-01 22:41:42.081 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:41:42 compute-0 nova_compute[189508]: 2025-12-01 22:41:42.081 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:41:42 compute-0 nova_compute[189508]: 2025-12-01 22:41:42.217 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:41:42 compute-0 nova_compute[189508]: 2025-12-01 22:41:42.285 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:41:42 compute-0 nova_compute[189508]: 2025-12-01 22:41:42.325 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:41:42 compute-0 nova_compute[189508]: 2025-12-01 22:41:42.326 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.382s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:41:42 compute-0 nova_compute[189508]: 2025-12-01 22:41:42.858 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:41:42 compute-0 nova_compute[189508]: 2025-12-01 22:41:42.859 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:41:42 compute-0 podman[244123]: 2025-12-01 22:41:42.861031529 +0000 UTC m=+0.124825110 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:41:42 compute-0 podman[244124]: 2025-12-01 22:41:42.882697954 +0000 UTC m=+0.133584768 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm)
Dec  1 22:41:42 compute-0 podman[244125]: 2025-12-01 22:41:42.891109694 +0000 UTC m=+0.147835984 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., release=1755695350, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 22:41:42 compute-0 podman[244126]: 2025-12-01 22:41:42.896030733 +0000 UTC m=+0.142219273 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, vcs-type=git, io.buildah.version=1.29.0, container_name=kepler, io.openshift.tags=base rhel9, name=ubi9, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  1 22:41:44 compute-0 nova_compute[189508]: 2025-12-01 22:41:44.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:41:44 compute-0 nova_compute[189508]: 2025-12-01 22:41:44.879 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:45 compute-0 nova_compute[189508]: 2025-12-01 22:41:45.343 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:49 compute-0 nova_compute[189508]: 2025-12-01 22:41:49.883 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:50 compute-0 nova_compute[189508]: 2025-12-01 22:41:50.346 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:54 compute-0 nova_compute[189508]: 2025-12-01 22:41:54.886 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:55 compute-0 nova_compute[189508]: 2025-12-01 22:41:55.349 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:41:59 compute-0 podman[203693]: time="2025-12-01T22:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:41:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:41:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4792 "" "Go-http-client/1.1"
Dec  1 22:41:59 compute-0 podman[244205]: 2025-12-01 22:41:59.852398226 +0000 UTC m=+0.118635433 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 22:41:59 compute-0 nova_compute[189508]: 2025-12-01 22:41:59.889 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:00 compute-0 nova_compute[189508]: 2025-12-01 22:42:00.353 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:01 compute-0 openstack_network_exporter[205887]: ERROR   22:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:42:01 compute-0 openstack_network_exporter[205887]: ERROR   22:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:42:01 compute-0 openstack_network_exporter[205887]: ERROR   22:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:42:01 compute-0 openstack_network_exporter[205887]: ERROR   22:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:42:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:42:01 compute-0 openstack_network_exporter[205887]: ERROR   22:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:42:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:42:02 compute-0 podman[244228]: 2025-12-01 22:42:02.819330059 +0000 UTC m=+0.096673840 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:42:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:42:04.618 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:42:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:42:04.619 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:42:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:42:04.620 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:42:04 compute-0 podman[244249]: 2025-12-01 22:42:04.84978826 +0000 UTC m=+0.130695817 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  1 22:42:04 compute-0 nova_compute[189508]: 2025-12-01 22:42:04.893 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:05 compute-0 nova_compute[189508]: 2025-12-01 22:42:05.356 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:08 compute-0 podman[244270]: 2025-12-01 22:42:08.85503672 +0000 UTC m=+0.113418315 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 22:42:08 compute-0 podman[244269]: 2025-12-01 22:42:08.885448754 +0000 UTC m=+0.156072238 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Dec  1 22:42:09 compute-0 nova_compute[189508]: 2025-12-01 22:42:09.896 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:10 compute-0 nova_compute[189508]: 2025-12-01 22:42:10.361 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:13 compute-0 podman[244314]: 2025-12-01 22:42:13.829785699 +0000 UTC m=+0.092975644 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Dec  1 22:42:13 compute-0 podman[244315]: 2025-12-01 22:42:13.854360008 +0000 UTC m=+0.100395275 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public)
Dec  1 22:42:13 compute-0 podman[244313]: 2025-12-01 22:42:13.856641233 +0000 UTC m=+0.119266582 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:42:13 compute-0 podman[244326]: 2025-12-01 22:42:13.865328789 +0000 UTC m=+0.098366598 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, maintainer=Red Hat, Inc., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your 
containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, io.buildah.version=1.29.0, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, release-0.7.12=)
Dec  1 22:42:14 compute-0 nova_compute[189508]: 2025-12-01 22:42:14.900 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:15 compute-0 nova_compute[189508]: 2025-12-01 22:42:15.365 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:19 compute-0 nova_compute[189508]: 2025-12-01 22:42:19.903 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:20 compute-0 nova_compute[189508]: 2025-12-01 22:42:20.369 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:24 compute-0 nova_compute[189508]: 2025-12-01 22:42:24.907 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:25 compute-0 nova_compute[189508]: 2025-12-01 22:42:25.374 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:29 compute-0 podman[203693]: time="2025-12-01T22:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:42:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:42:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
Dec  1 22:42:29 compute-0 nova_compute[189508]: 2025-12-01 22:42:29.911 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:30 compute-0 nova_compute[189508]: 2025-12-01 22:42:30.377 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:30 compute-0 podman[244394]: 2025-12-01 22:42:30.853502862 +0000 UTC m=+0.120945948 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:42:31 compute-0 nova_compute[189508]: 2025-12-01 22:42:31.194 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:42:31 compute-0 openstack_network_exporter[205887]: ERROR   22:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:42:31 compute-0 openstack_network_exporter[205887]: ERROR   22:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:42:31 compute-0 openstack_network_exporter[205887]: ERROR   22:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:42:31 compute-0 openstack_network_exporter[205887]: ERROR   22:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:42:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:42:31 compute-0 openstack_network_exporter[205887]: ERROR   22:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:42:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:42:33 compute-0 podman[244418]: 2025-12-01 22:42:33.892817953 +0000 UTC m=+0.152628310 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:42:34 compute-0 nova_compute[189508]: 2025-12-01 22:42:34.235 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:42:34 compute-0 nova_compute[189508]: 2025-12-01 22:42:34.915 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:35 compute-0 nova_compute[189508]: 2025-12-01 22:42:35.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:42:35 compute-0 nova_compute[189508]: 2025-12-01 22:42:35.381 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:35 compute-0 podman[244438]: 2025-12-01 22:42:35.831231467 +0000 UTC m=+0.108338441 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec  1 22:42:37 compute-0 nova_compute[189508]: 2025-12-01 22:42:37.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:42:38 compute-0 nova_compute[189508]: 2025-12-01 22:42:38.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:42:38 compute-0 nova_compute[189508]: 2025-12-01 22:42:38.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:42:39 compute-0 nova_compute[189508]: 2025-12-01 22:42:39.136 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-99b450eb-11ab-433d-9cf3-da58ea311e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:42:39 compute-0 nova_compute[189508]: 2025-12-01 22:42:39.136 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-99b450eb-11ab-433d-9cf3-da58ea311e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:42:39 compute-0 nova_compute[189508]: 2025-12-01 22:42:39.137 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:42:39 compute-0 podman[244459]: 2025-12-01 22:42:39.825948217 +0000 UTC m=+0.092755158 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 22:42:39 compute-0 podman[244458]: 2025-12-01 22:42:39.901095204 +0000 UTC m=+0.175019857 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:42:39 compute-0 nova_compute[189508]: 2025-12-01 22:42:39.918 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:40 compute-0 nova_compute[189508]: 2025-12-01 22:42:40.386 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:40 compute-0 nova_compute[189508]: 2025-12-01 22:42:40.969 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Updating instance_info_cache with network_info: [{"id": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "address": "fa:16:3e:b8:6b:fb", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e734aeb-82", "ovs_interfaceid": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.003 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-99b450eb-11ab-433d-9cf3-da58ea311e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.004 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.005 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.006 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.007 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.042 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.043 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.044 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.045 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.226 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.340 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.342 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.441 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.444 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.530 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.535 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.630 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.647 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.726 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.728 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.792 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.794 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.882 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.883 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.948 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:42:41 compute-0 nova_compute[189508]: 2025-12-01 22:42:41.958 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:42:42 compute-0 nova_compute[189508]: 2025-12-01 22:42:42.024 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:42:42 compute-0 nova_compute[189508]: 2025-12-01 22:42:42.026 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:42:42 compute-0 nova_compute[189508]: 2025-12-01 22:42:42.155 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:42:42 compute-0 nova_compute[189508]: 2025-12-01 22:42:42.157 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:42:42 compute-0 nova_compute[189508]: 2025-12-01 22:42:42.256 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:42:42 compute-0 nova_compute[189508]: 2025-12-01 22:42:42.258 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:42:42 compute-0 nova_compute[189508]: 2025-12-01 22:42:42.322 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:42:42 compute-0 nova_compute[189508]: 2025-12-01 22:42:42.335 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:42:42 compute-0 nova_compute[189508]: 2025-12-01 22:42:42.411 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:42:42 compute-0 nova_compute[189508]: 2025-12-01 22:42:42.413 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:42:42 compute-0 nova_compute[189508]: 2025-12-01 22:42:42.511 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:42:42 compute-0 nova_compute[189508]: 2025-12-01 22:42:42.513 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:42:42 compute-0 nova_compute[189508]: 2025-12-01 22:42:42.616 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:42:42 compute-0 nova_compute[189508]: 2025-12-01 22:42:42.618 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:42:42 compute-0 nova_compute[189508]: 2025-12-01 22:42:42.720 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.151 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.153 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4596MB free_disk=72.13369750976562GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.153 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.154 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.401 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.402 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance ef18b98f-df89-44d0-9215-5c2e556e10be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.402 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 99b450eb-11ab-433d-9cf3-da58ea311e94 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.402 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance dae82663-6de4-4397-8aab-9559ddeaec24 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.403 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.403 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.510 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing inventories for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.614 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating ProviderTree inventory for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.615 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating inventory in ProviderTree for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.641 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing aggregate associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.678 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing trait associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.829 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.848 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.851 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.852 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:42:43 compute-0 nova_compute[189508]: 2025-12-01 22:42:43.853 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:42:44 compute-0 nova_compute[189508]: 2025-12-01 22:42:44.058 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:42:44 compute-0 nova_compute[189508]: 2025-12-01 22:42:44.059 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:42:44 compute-0 nova_compute[189508]: 2025-12-01 22:42:44.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:42:44 compute-0 nova_compute[189508]: 2025-12-01 22:42:44.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 22:42:44 compute-0 nova_compute[189508]: 2025-12-01 22:42:44.216 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 22:42:44 compute-0 podman[244549]: 2025-12-01 22:42:44.849548447 +0000 UTC m=+0.115299639 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 22:42:44 compute-0 podman[244551]: 2025-12-01 22:42:44.863604056 +0000 UTC m=+0.119520529 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.buildah.version=1.29.0, distribution-scope=public, build-date=2024-09-18T21:23:30, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, release=1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  1 22:42:44 compute-0 podman[244550]: 2025-12-01 22:42:44.868644469 +0000 UTC m=+0.125825898 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal 
Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vendor=Red Hat, Inc., version=9.6)
Dec  1 22:42:44 compute-0 podman[244548]: 2025-12-01 22:42:44.86970775 +0000 UTC m=+0.145629901 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 22:42:44 compute-0 nova_compute[189508]: 2025-12-01 22:42:44.921 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:45 compute-0 nova_compute[189508]: 2025-12-01 22:42:45.389 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:46 compute-0 nova_compute[189508]: 2025-12-01 22:42:46.216 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:42:48 compute-0 nova_compute[189508]: 2025-12-01 22:42:48.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:42:48 compute-0 nova_compute[189508]: 2025-12-01 22:42:48.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 22:42:49 compute-0 nova_compute[189508]: 2025-12-01 22:42:49.923 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:50 compute-0 nova_compute[189508]: 2025-12-01 22:42:50.393 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:54 compute-0 nova_compute[189508]: 2025-12-01 22:42:54.926 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:55 compute-0 nova_compute[189508]: 2025-12-01 22:42:55.398 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:42:57 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 22:42:59 compute-0 podman[203693]: time="2025-12-01T22:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:42:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:42:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4795 "" "Go-http-client/1.1"
Dec  1 22:42:59 compute-0 nova_compute[189508]: 2025-12-01 22:42:59.931 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:00 compute-0 nova_compute[189508]: 2025-12-01 22:43:00.402 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:01 compute-0 openstack_network_exporter[205887]: ERROR   22:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:43:01 compute-0 openstack_network_exporter[205887]: ERROR   22:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:43:01 compute-0 openstack_network_exporter[205887]: ERROR   22:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:43:01 compute-0 openstack_network_exporter[205887]: ERROR   22:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:43:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:43:01 compute-0 openstack_network_exporter[205887]: ERROR   22:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:43:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:43:01 compute-0 podman[244631]: 2025-12-01 22:43:01.812374976 +0000 UTC m=+0.082308691 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:43:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:43:04.619 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:43:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:43:04.620 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:43:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:43:04.621 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:43:04 compute-0 podman[244655]: 2025-12-01 22:43:04.865817257 +0000 UTC m=+0.138046985 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 22:43:04 compute-0 nova_compute[189508]: 2025-12-01 22:43:04.936 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:05 compute-0 nova_compute[189508]: 2025-12-01 22:43:05.405 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:06 compute-0 podman[244675]: 2025-12-01 22:43:06.848245352 +0000 UTC m=+0.112219691 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 22:43:09 compute-0 nova_compute[189508]: 2025-12-01 22:43:09.938 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:10 compute-0 nova_compute[189508]: 2025-12-01 22:43:10.409 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:10 compute-0 podman[244696]: 2025-12-01 22:43:10.856829228 +0000 UTC m=+0.119773976 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 22:43:10 compute-0 podman[244695]: 2025-12-01 22:43:10.914174908 +0000 UTC m=+0.188221852 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.057 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.098 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Triggering sync for uuid db72b066-1974-41bb-a917-13b5ba129196 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.099 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Triggering sync for uuid ef18b98f-df89-44d0-9215-5c2e556e10be _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.099 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Triggering sync for uuid 99b450eb-11ab-433d-9cf3-da58ea311e94 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.099 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Triggering sync for uuid dae82663-6de4-4397-8aab-9559ddeaec24 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.099 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "db72b066-1974-41bb-a917-13b5ba129196" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.100 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "db72b066-1974-41bb-a917-13b5ba129196" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.100 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "ef18b98f-df89-44d0-9215-5c2e556e10be" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.101 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.102 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "99b450eb-11ab-433d-9cf3-da58ea311e94" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.102 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.103 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "dae82663-6de4-4397-8aab-9559ddeaec24" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.103 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "dae82663-6de4-4397-8aab-9559ddeaec24" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.185 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.186 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.218 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "dae82663-6de4-4397-8aab-9559ddeaec24" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.228 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "db72b066-1974-41bb-a917-13b5ba129196" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:43:14 compute-0 nova_compute[189508]: 2025-12-01 22:43:14.942 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:15 compute-0 nova_compute[189508]: 2025-12-01 22:43:15.413 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:15 compute-0 podman[244739]: 2025-12-01 22:43:15.879780228 +0000 UTC m=+0.131998394 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:43:15 compute-0 podman[244740]: 2025-12-01 22:43:15.882439813 +0000 UTC m=+0.140399762 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 22:43:15 compute-0 podman[244747]: 2025-12-01 22:43:15.895022671 +0000 UTC m=+0.130023547 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, container_name=kepler, release=1214.1726694543, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, maintainer=Red Hat, Inc.)
Dec  1 22:43:15 compute-0 podman[244741]: 2025-12-01 22:43:15.906524448 +0000 UTC m=+0.148075380 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal 
Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 22:43:19 compute-0 nova_compute[189508]: 2025-12-01 22:43:19.945 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:20 compute-0 nova_compute[189508]: 2025-12-01 22:43:20.429 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:24 compute-0 nova_compute[189508]: 2025-12-01 22:43:24.949 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:25 compute-0 nova_compute[189508]: 2025-12-01 22:43:25.433 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:29 compute-0 podman[203693]: time="2025-12-01T22:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:43:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:43:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
Dec  1 22:43:29 compute-0 nova_compute[189508]: 2025-12-01 22:43:29.951 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:30 compute-0 nova_compute[189508]: 2025-12-01 22:43:30.439 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:31 compute-0 openstack_network_exporter[205887]: ERROR   22:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:43:31 compute-0 openstack_network_exporter[205887]: ERROR   22:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:43:31 compute-0 openstack_network_exporter[205887]: ERROR   22:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:43:31 compute-0 openstack_network_exporter[205887]: ERROR   22:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:43:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:43:31 compute-0 openstack_network_exporter[205887]: ERROR   22:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:43:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:43:32 compute-0 podman[244820]: 2025-12-01 22:43:32.860940156 +0000 UTC m=+0.122115976 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 22:43:34 compute-0 nova_compute[189508]: 2025-12-01 22:43:34.241 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:43:34 compute-0 nova_compute[189508]: 2025-12-01 22:43:34.955 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.267 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.268 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.268 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.270 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac33e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.280 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'db72b066-1974-41bb-a917-13b5ba129196', 'name': 'test_0', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.285 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dae82663-6de4-4397-8aab-9559ddeaec24', 'name': 'vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {'metering.server_group': '40d7879f-33f5-4fcb-8784-d9088730e18f'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.288 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '99b450eb-11ab-433d-9cf3-da58ea311e94', 'name': 'vn-xggku2d-wifaxhcghats-izgcjuxscyy2-vnf-fyan4lptzpzi', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {'metering.server_group': '40d7879f-33f5-4fcb-8784-d9088730e18f'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.292 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ef18b98f-df89-44d0-9215-5c2e556e10be', 'name': 'vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {'metering.server_group': '40d7879f-33f5-4fcb-8784-d9088730e18f'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.292 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.292 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.292 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.293 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.294 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T22:43:35.293625) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.299 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.305 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.311 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.317 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.packets volume: 60 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.318 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.318 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.318 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.319 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.319 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.320 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.320 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.320 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T22:43:35.319233) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.320 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.321 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.321 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.321 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.321 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.322 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.322 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T22:43:35.322195) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.323 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.323 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.323 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.324 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.325 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T22:43:35.324840) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.373 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.374 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.374 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.411 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.412 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.412 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 nova_compute[189508]: 2025-12-01 22:43:35.444 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.450 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.452 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.452 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.494 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.495 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.495 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.496 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.497 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.497 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.498 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T22:43:35.498019) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.599 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.600 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.600 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.693 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.693 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.694 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.795 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.795 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.795 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 podman[244846]: 2025-12-01 22:43:35.861104807 +0000 UTC m=+0.136972198 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.904 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.905 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.905 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.908 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.908 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.908 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.909 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T22:43:35.908907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.909 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 484161753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.910 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 126486600 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.911 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 84264950 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.911 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.latency volume: 529113669 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.912 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.latency volume: 125664984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.912 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.latency volume: 99600138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.913 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.latency volume: 518522445 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.913 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.latency volume: 95166420 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.913 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.latency volume: 71008121 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.914 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.latency volume: 493804988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.914 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.latency volume: 100192430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.915 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.latency volume: 68791964 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.916 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.916 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.917 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.917 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.917 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.917 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.917 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.918 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.918 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.919 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.919 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.920 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.921 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.921 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.922 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.923 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.923 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.924 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.925 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.925 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.926 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T22:43:35.917701) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.926 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.926 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.926 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.927 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.927 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.927 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.927 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.928 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.928 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.928 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.929 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.929 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.929 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.930 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.930 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.930 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.931 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.932 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T22:43:35.927027) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.933 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.933 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.933 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.934 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.935 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.936 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.937 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.938 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.938 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.939 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.940 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T22:43:35.934559) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.940 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.941 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.941 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.942 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.942 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.943 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.944 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.945 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.945 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.946 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.946 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.946 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.947 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.947 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.948 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.949 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T22:43:35.946389) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.949 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.949 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.950 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.950 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.951 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.951 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.952 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.952 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.954 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.954 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.954 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.954 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.955 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.955 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 2925316221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.956 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 17009348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.957 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.956 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T22:43:35.955169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.957 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.latency volume: 1954219616 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.958 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.latency volume: 13544625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.958 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.959 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.latency volume: 1768561782 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.959 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.latency volume: 11037405 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.960 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.961 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.latency volume: 2018654658 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.961 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.latency volume: 11549778 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.962 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.963 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.964 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.964 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.964 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.965 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T22:43:35.964817) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.964 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:35.999 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/cpu volume: 41580000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.036 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/cpu volume: 34840000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.063 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/cpu volume: 36100000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.096 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/cpu volume: 388920000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.097 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.097 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.097 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.097 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.097 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.097 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.098 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.098 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.098 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.098 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.099 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.099 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.099 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.099 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.099 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.099 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T22:43:36.097697) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.099 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.bytes.delta volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.100 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.100 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.100 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.100 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.101 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.101 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.101 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T22:43:36.099599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.101 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.101 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.101 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.101 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.101 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.102 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.102 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.102 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.102 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.103 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.103 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.requests volume: 241 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.103 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.103 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.104 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.104 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T22:43:36.101214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.104 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.104 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.104 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.104 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.105 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.105 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.105 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.106 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.106 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.106 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.106 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.106 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.packets volume: 53 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.107 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.107 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.107 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.107 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.107 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T22:43:36.104945) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.108 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.108 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.108 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T22:43:36.106034) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.108 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.108 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.108 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.109 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.109 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.109 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.109 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.110 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.110 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.110 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.110 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.110 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.110 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.110 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.111 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.111 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.111 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.111 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.111 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.111 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.113 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.114 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.bytes volume: 2258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.115 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.115 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.bytes volume: 7200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.116 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T22:43:36.107687) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.118 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.118 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.118 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.119 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.bytes.delta volume: 592 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.119 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T22:43:36.108819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.120 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.120 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T22:43:36.110413) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.121 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T22:43:36.111928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.122 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.123 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.123 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T22:43:36.118711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.123 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.123 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/memory.usage volume: 48.78515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.124 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.124 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.125 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/memory.usage volume: 49.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T22:43:36.123503) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.126 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.127 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.127 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.127 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.128 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T22:43:36.127951) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.128 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.129 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.bytes volume: 1528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.129 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.130 14 DEBUG ceilometer.compute.pollsters [-] ef18b98f-df89-44d0-9215-5c2e556e10be/network.incoming.bytes volume: 8322 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.130 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:43:36.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:43:36 compute-0 nova_compute[189508]: 2025-12-01 22:43:36.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:43:37 compute-0 podman[244865]: 2025-12-01 22:43:37.886469631 +0000 UTC m=+0.158919390 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 22:43:39 compute-0 nova_compute[189508]: 2025-12-01 22:43:39.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:43:39 compute-0 nova_compute[189508]: 2025-12-01 22:43:39.959 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:40 compute-0 nova_compute[189508]: 2025-12-01 22:43:40.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:43:40 compute-0 nova_compute[189508]: 2025-12-01 22:43:40.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:43:40 compute-0 nova_compute[189508]: 2025-12-01 22:43:40.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:43:40 compute-0 nova_compute[189508]: 2025-12-01 22:43:40.449 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:40 compute-0 nova_compute[189508]: 2025-12-01 22:43:40.809 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:43:40 compute-0 nova_compute[189508]: 2025-12-01 22:43:40.811 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:43:40 compute-0 nova_compute[189508]: 2025-12-01 22:43:40.813 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:43:40 compute-0 nova_compute[189508]: 2025-12-01 22:43:40.814 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid db72b066-1974-41bb-a917-13b5ba129196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:43:41 compute-0 podman[244886]: 2025-12-01 22:43:41.812455249 +0000 UTC m=+0.089147020 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  1 22:43:41 compute-0 podman[244885]: 2025-12-01 22:43:41.891171222 +0000 UTC m=+0.169150200 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.215 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updating instance_info_cache with network_info: [{"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.263 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.264 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.266 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.266 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.266 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.267 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.304 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.304 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.305 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.305 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.434 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.535 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.537 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.602 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.603 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.674 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.676 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.742 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.752 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.819 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.821 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.893 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.894 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.956 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:43:43 compute-0 nova_compute[189508]: 2025-12-01 22:43:43.957 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.033 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.045 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.119 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.120 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.188 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.190 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.286 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.288 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.357 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.369 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.440 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.441 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.528 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.530 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.592 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.594 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.661 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:43:44 compute-0 nova_compute[189508]: 2025-12-01 22:43:44.963 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:45 compute-0 nova_compute[189508]: 2025-12-01 22:43:45.233 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:43:45 compute-0 nova_compute[189508]: 2025-12-01 22:43:45.234 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4597MB free_disk=72.13357543945312GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:43:45 compute-0 nova_compute[189508]: 2025-12-01 22:43:45.235 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:43:45 compute-0 nova_compute[189508]: 2025-12-01 22:43:45.235 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:43:45 compute-0 nova_compute[189508]: 2025-12-01 22:43:45.350 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:43:45 compute-0 nova_compute[189508]: 2025-12-01 22:43:45.350 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance ef18b98f-df89-44d0-9215-5c2e556e10be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:43:45 compute-0 nova_compute[189508]: 2025-12-01 22:43:45.350 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 99b450eb-11ab-433d-9cf3-da58ea311e94 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:43:45 compute-0 nova_compute[189508]: 2025-12-01 22:43:45.350 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance dae82663-6de4-4397-8aab-9559ddeaec24 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:43:45 compute-0 nova_compute[189508]: 2025-12-01 22:43:45.351 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:43:45 compute-0 nova_compute[189508]: 2025-12-01 22:43:45.351 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:43:45 compute-0 nova_compute[189508]: 2025-12-01 22:43:45.454 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:45 compute-0 nova_compute[189508]: 2025-12-01 22:43:45.477 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:43:45 compute-0 nova_compute[189508]: 2025-12-01 22:43:45.492 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:43:45 compute-0 nova_compute[189508]: 2025-12-01 22:43:45.495 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:43:45 compute-0 nova_compute[189508]: 2025-12-01 22:43:45.496 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.261s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:43:46 compute-0 nova_compute[189508]: 2025-12-01 22:43:46.429 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:43:46 compute-0 podman[244982]: 2025-12-01 22:43:46.812089928 +0000 UTC m=+0.089709587 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:43:46 compute-0 podman[244983]: 2025-12-01 22:43:46.854742098 +0000 UTC m=+0.116619330 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3)
Dec  1 22:43:46 compute-0 podman[244989]: 2025-12-01 22:43:46.869165357 +0000 UTC m=+0.123842225 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, managed_by=edpm_ansible, config_id=edpm, io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 22:43:46 compute-0 podman[244993]: 2025-12-01 22:43:46.875992821 +0000 UTC m=+0.123759972 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  1 22:43:48 compute-0 nova_compute[189508]: 2025-12-01 22:43:48.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:43:49 compute-0 nova_compute[189508]: 2025-12-01 22:43:49.966 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:50 compute-0 nova_compute[189508]: 2025-12-01 22:43:50.457 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:54 compute-0 nova_compute[189508]: 2025-12-01 22:43:54.971 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:55 compute-0 nova_compute[189508]: 2025-12-01 22:43:55.461 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:43:59 compute-0 podman[203693]: time="2025-12-01T22:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:43:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:43:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec  1 22:43:59 compute-0 nova_compute[189508]: 2025-12-01 22:43:59.976 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:00 compute-0 nova_compute[189508]: 2025-12-01 22:44:00.469 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:01 compute-0 openstack_network_exporter[205887]: ERROR   22:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:44:01 compute-0 openstack_network_exporter[205887]: ERROR   22:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:44:01 compute-0 openstack_network_exporter[205887]: ERROR   22:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:44:01 compute-0 openstack_network_exporter[205887]: ERROR   22:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:44:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:44:01 compute-0 openstack_network_exporter[205887]: ERROR   22:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:44:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:44:03 compute-0 podman[245063]: 2025-12-01 22:44:03.865784954 +0000 UTC m=+0.125681975 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 22:44:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:04.620 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:44:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:04.621 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:44:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:04.622 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:44:04 compute-0 nova_compute[189508]: 2025-12-01 22:44:04.979 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:05 compute-0 nova_compute[189508]: 2025-12-01 22:44:05.474 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:06 compute-0 podman[245088]: 2025-12-01 22:44:06.879381547 +0000 UTC m=+0.138155791 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:44:08 compute-0 podman[245107]: 2025-12-01 22:44:08.900533401 +0000 UTC m=+0.167144713 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, 
io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 22:44:09 compute-0 nova_compute[189508]: 2025-12-01 22:44:09.983 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:10 compute-0 nova_compute[189508]: 2025-12-01 22:44:10.478 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:12 compute-0 podman[245127]: 2025-12-01 22:44:12.864408374 +0000 UTC m=+0.117251118 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  1 22:44:12 compute-0 podman[245126]: 2025-12-01 22:44:12.911665465 +0000 UTC m=+0.181000106 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 22:44:14 compute-0 nova_compute[189508]: 2025-12-01 22:44:14.986 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:15 compute-0 nova_compute[189508]: 2025-12-01 22:44:15.483 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:17 compute-0 podman[245171]: 2025-12-01 22:44:17.809415194 +0000 UTC m=+0.085650761 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 22:44:17 compute-0 podman[245173]: 2025-12-01 22:44:17.822131505 +0000 UTC m=+0.087788452 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.component=ubi9-container, version=9.4, io.buildah.version=1.29.0, architecture=x86_64, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 22:44:17 compute-0 podman[245170]: 2025-12-01 22:44:17.830870973 +0000 UTC m=+0.111703550 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:44:17 compute-0 podman[245172]: 2025-12-01 22:44:17.8350103 +0000 UTC m=+0.108764696 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Dec  1 22:44:19 compute-0 nova_compute[189508]: 2025-12-01 22:44:19.989 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:20 compute-0 nova_compute[189508]: 2025-12-01 22:44:20.487 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:24 compute-0 nova_compute[189508]: 2025-12-01 22:44:24.993 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:25 compute-0 nova_compute[189508]: 2025-12-01 22:44:25.490 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:29 compute-0 podman[203693]: time="2025-12-01T22:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:44:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:44:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4790 "" "Go-http-client/1.1"
Dec  1 22:44:29 compute-0 nova_compute[189508]: 2025-12-01 22:44:29.998 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:30 compute-0 nova_compute[189508]: 2025-12-01 22:44:30.492 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:31 compute-0 openstack_network_exporter[205887]: ERROR   22:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:44:31 compute-0 openstack_network_exporter[205887]: ERROR   22:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:44:31 compute-0 openstack_network_exporter[205887]: ERROR   22:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:44:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:44:31 compute-0 openstack_network_exporter[205887]: ERROR   22:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:44:31 compute-0 openstack_network_exporter[205887]: ERROR   22:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:44:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:44:32 compute-0 nova_compute[189508]: 2025-12-01 22:44:32.195 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:44:34 compute-0 nova_compute[189508]: 2025-12-01 22:44:34.253 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:44:34 compute-0 podman[245252]: 2025-12-01 22:44:34.873792275 +0000 UTC m=+0.145053886 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:44:35 compute-0 nova_compute[189508]: 2025-12-01 22:44:35.001 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:35 compute-0 nova_compute[189508]: 2025-12-01 22:44:35.495 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:37 compute-0 podman[245275]: 2025-12-01 22:44:37.838181461 +0000 UTC m=+0.104611749 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:44:38 compute-0 nova_compute[189508]: 2025-12-01 22:44:38.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:44:39 compute-0 podman[245295]: 2025-12-01 22:44:39.878483788 +0000 UTC m=+0.140319112 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 22:44:40 compute-0 nova_compute[189508]: 2025-12-01 22:44:40.004 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:40 compute-0 nova_compute[189508]: 2025-12-01 22:44:40.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:44:40 compute-0 nova_compute[189508]: 2025-12-01 22:44:40.499 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.273 189512 DEBUG oslo_concurrency.lockutils [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "ef18b98f-df89-44d0-9215-5c2e556e10be" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.274 189512 DEBUG oslo_concurrency.lockutils [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.274 189512 DEBUG oslo_concurrency.lockutils [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.275 189512 DEBUG oslo_concurrency.lockutils [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.275 189512 DEBUG oslo_concurrency.lockutils [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.277 189512 INFO nova.compute.manager [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Terminating instance#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.279 189512 DEBUG nova.compute.manager [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 22:44:41 compute-0 kernel: tap112b3e51-47 (unregistering): left promiscuous mode
Dec  1 22:44:41 compute-0 NetworkManager[56278]: <info>  [1764629081.3286] device (tap112b3e51-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 22:44:41 compute-0 ovn_controller[97770]: 2025-12-01T22:44:41Z|00050|binding|INFO|Releasing lport 112b3e51-47c2-499f-9108-af9d45576c1e from this chassis (sb_readonly=0)
Dec  1 22:44:41 compute-0 ovn_controller[97770]: 2025-12-01T22:44:41Z|00051|binding|INFO|Setting lport 112b3e51-47c2-499f-9108-af9d45576c1e down in Southbound
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.341 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:41 compute-0 ovn_controller[97770]: 2025-12-01T22:44:41Z|00052|binding|INFO|Removing iface tap112b3e51-47 ovn-installed in OVS
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.347 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:41.356 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:04:8b 192.168.0.23'], port_security=['fa:16:3e:96:04:8b 192.168.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-37pfkxggku2d-mb7dw7aouq46-553w42hrmnbi-port-am2gni7fe4iu', 'neutron:cidrs': '192.168.0.23/24', 'neutron:device_id': 'ef18b98f-df89-44d0-9215-5c2e556e10be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-37pfkxggku2d-mb7dw7aouq46-553w42hrmnbi-port-am2gni7fe4iu', 'neutron:project_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a56d0f98-60b7-42d6-a9fa-4c77301b81c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.175', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8157a1f-e2f4-4050-ab6e-a95d2880ddbb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=112b3e51-47c2-499f-9108-af9d45576c1e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:44:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:41.358 106662 INFO neutron.agent.ovn.metadata.agent [-] Port 112b3e51-47c2-499f-9108-af9d45576c1e in datapath dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c unbound from our chassis#033[00m
Dec  1 22:44:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:41.359 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.364 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:41.384 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[c707d348-467b-4bbd-8e53-1f847029f205]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:44:41 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec  1 22:44:41 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 7min 56.424s CPU time.
Dec  1 22:44:41 compute-0 systemd-machined[155759]: Machine qemu-2-instance-00000002 terminated.
Dec  1 22:44:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:41.436 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[8244ef0b-8dc6-4b06-9d1f-16abc0dc55cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:44:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:41.441 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2f0fca-d7ea-4669-a85e-8a0411be278b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:44:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:41.496 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[93d45c35-2c21-4895-ad38-a05bbc1b3a12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.522 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:41.535 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[983f0e70-a8f8-4505-b010-8053bc8a28a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd6e3c27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:b1:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384760, 'reachable_time': 30718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245329, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.540 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:41.563 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[836b811e-ce27-4676-a937-80653114ae30]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapdd6e3c27-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384779, 'tstamp': 384779}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245336, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdd6e3c27-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384784, 'tstamp': 384784}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245336, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:44:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:41.566 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd6e3c27-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.569 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.578 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:41.579 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd6e3c27-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:44:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:41.579 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:44:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:41.580 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdd6e3c27-10, col_values=(('external_ids', {'iface-id': 'e303b09b-4673-4950-aa2d-91085a5bc5f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:44:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:41.581 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.622 189512 INFO nova.virt.libvirt.driver [-] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Instance destroyed successfully.#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.623 189512 DEBUG nova.objects.instance [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lazy-loading 'resources' on Instance uuid ef18b98f-df89-44d0-9215-5c2e556e10be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.645 189512 DEBUG nova.virt.libvirt.vif [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:33:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xggku2d-mb7dw7aouq46-553w42hrmnbi-vnf-ncis5qh6ennv',id=2,image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T22:33:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='40d7879f-33f5-4fcb-8784-d9088730e18f'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='af2fbf0e1b5f40c19aed69d241db7727',ramdisk_id='',reservation_id='r-gbn10oql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T22:33:38Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04Nzc2MjEyNzIxNTY1NzAwNDgwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg3NzYyMTI3MjE1NjU3MDA0ODA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODc3NjIxMjcyMTU2NTcwMDQ4MD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg3NzYyMTI3MjE1NjU3MDA0ODA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04Nzc2MjEyNzIxNTY1NzAwNDgwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04Nzc2MjEyNzIxNTY1NzAwNDgwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  1 22:44:41 compute-0 nova_compute[189508]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODc3N
jIxMjcyMTU2NTcwMDQ4MD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg3NzYyMTI3MjE1NjU3MDA0ODA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04Nzc2MjEyNzIxNTY1NzAwNDgwPT0tLQo=',user_id='3b810e864d6c4d058e539f62ad181096',uuid=ef18b98f-df89-44d0-9215-5c2e556e10be,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "112b3e51-47c2-499f-9108-af9d45576c1e", "address": "fa:16:3e:96:04:8b", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap112b3e51-47", "ovs_interfaceid": "112b3e51-47c2-499f-9108-af9d45576c1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.645 189512 DEBUG nova.network.os_vif_util [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converting VIF {"id": "112b3e51-47c2-499f-9108-af9d45576c1e", "address": "fa:16:3e:96:04:8b", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap112b3e51-47", "ovs_interfaceid": "112b3e51-47c2-499f-9108-af9d45576c1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.647 189512 DEBUG nova.network.os_vif_util [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:04:8b,bridge_name='br-int',has_traffic_filtering=True,id=112b3e51-47c2-499f-9108-af9d45576c1e,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap112b3e51-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.647 189512 DEBUG os_vif [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:04:8b,bridge_name='br-int',has_traffic_filtering=True,id=112b3e51-47c2-499f-9108-af9d45576c1e,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap112b3e51-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.652 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.652 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap112b3e51-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.655 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.660 189512 DEBUG nova.compute.manager [req-8d868ae4-aacb-4e0d-807d-d32d6f3c1c47 req-6aac0bec-130a-45cc-ace0-73f93630c867 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Received event network-vif-unplugged-112b3e51-47c2-499f-9108-af9d45576c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.661 189512 DEBUG oslo_concurrency.lockutils [req-8d868ae4-aacb-4e0d-807d-d32d6f3c1c47 req-6aac0bec-130a-45cc-ace0-73f93630c867 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.661 189512 DEBUG oslo_concurrency.lockutils [req-8d868ae4-aacb-4e0d-807d-d32d6f3c1c47 req-6aac0bec-130a-45cc-ace0-73f93630c867 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.662 189512 DEBUG oslo_concurrency.lockutils [req-8d868ae4-aacb-4e0d-807d-d32d6f3c1c47 req-6aac0bec-130a-45cc-ace0-73f93630c867 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.662 189512 DEBUG nova.compute.manager [req-8d868ae4-aacb-4e0d-807d-d32d6f3c1c47 req-6aac0bec-130a-45cc-ace0-73f93630c867 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] No waiting events found dispatching network-vif-unplugged-112b3e51-47c2-499f-9108-af9d45576c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.663 189512 DEBUG nova.compute.manager [req-8d868ae4-aacb-4e0d-807d-d32d6f3c1c47 req-6aac0bec-130a-45cc-ace0-73f93630c867 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Received event network-vif-unplugged-112b3e51-47c2-499f-9108-af9d45576c1e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.663 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.668 189512 INFO os_vif [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:04:8b,bridge_name='br-int',has_traffic_filtering=True,id=112b3e51-47c2-499f-9108-af9d45576c1e,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap112b3e51-47')#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.669 189512 INFO nova.virt.libvirt.driver [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Deleting instance files /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be_del#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.670 189512 INFO nova.virt.libvirt.driver [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Deletion of /var/lib/nova/instances/ef18b98f-df89-44d0-9215-5c2e556e10be_del complete#033[00m
Dec  1 22:44:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:41.731 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.730 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:41.735 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.759 189512 DEBUG nova.virt.libvirt.host [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.760 189512 INFO nova.virt.libvirt.host [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] UEFI support detected#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.765 189512 INFO nova.compute.manager [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Took 0.49 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.766 189512 DEBUG oslo.service.loopingcall [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.766 189512 DEBUG nova.compute.manager [-] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 22:44:41 compute-0 nova_compute[189508]: 2025-12-01 22:44:41.767 189512 DEBUG nova.network.neutron [-] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 22:44:41 compute-0 rsyslogd[236992]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 22:44:41.645 189512 DEBUG nova.virt.libvirt.vif [None req-ebd30009-d4 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 22:44:42 compute-0 nova_compute[189508]: 2025-12-01 22:44:42.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:44:42 compute-0 nova_compute[189508]: 2025-12-01 22:44:42.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:44:42 compute-0 nova_compute[189508]: 2025-12-01 22:44:42.242 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907#033[00m
Dec  1 22:44:42 compute-0 nova_compute[189508]: 2025-12-01 22:44:42.590 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-99b450eb-11ab-433d-9cf3-da58ea311e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:44:42 compute-0 nova_compute[189508]: 2025-12-01 22:44:42.591 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-99b450eb-11ab-433d-9cf3-da58ea311e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:44:42 compute-0 nova_compute[189508]: 2025-12-01 22:44:42.592 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:44:43 compute-0 nova_compute[189508]: 2025-12-01 22:44:43.856 189512 DEBUG nova.compute.manager [req-fc368cf8-50e3-47f8-884b-0e02d8800fec req-d00e16c1-09d6-48ea-b2d9-b7226e5b7a53 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Received event network-vif-plugged-112b3e51-47c2-499f-9108-af9d45576c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:44:43 compute-0 nova_compute[189508]: 2025-12-01 22:44:43.857 189512 DEBUG oslo_concurrency.lockutils [req-fc368cf8-50e3-47f8-884b-0e02d8800fec req-d00e16c1-09d6-48ea-b2d9-b7226e5b7a53 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:44:43 compute-0 nova_compute[189508]: 2025-12-01 22:44:43.857 189512 DEBUG oslo_concurrency.lockutils [req-fc368cf8-50e3-47f8-884b-0e02d8800fec req-d00e16c1-09d6-48ea-b2d9-b7226e5b7a53 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:44:43 compute-0 nova_compute[189508]: 2025-12-01 22:44:43.857 189512 DEBUG oslo_concurrency.lockutils [req-fc368cf8-50e3-47f8-884b-0e02d8800fec req-d00e16c1-09d6-48ea-b2d9-b7226e5b7a53 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:44:43 compute-0 nova_compute[189508]: 2025-12-01 22:44:43.858 189512 DEBUG nova.compute.manager [req-fc368cf8-50e3-47f8-884b-0e02d8800fec req-d00e16c1-09d6-48ea-b2d9-b7226e5b7a53 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] No waiting events found dispatching network-vif-plugged-112b3e51-47c2-499f-9108-af9d45576c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:44:43 compute-0 nova_compute[189508]: 2025-12-01 22:44:43.858 189512 WARNING nova.compute.manager [req-fc368cf8-50e3-47f8-884b-0e02d8800fec req-d00e16c1-09d6-48ea-b2d9-b7226e5b7a53 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Received unexpected event network-vif-plugged-112b3e51-47c2-499f-9108-af9d45576c1e for instance with vm_state active and task_state deleting.#033[00m
Dec  1 22:44:43 compute-0 nova_compute[189508]: 2025-12-01 22:44:43.859 189512 DEBUG nova.compute.manager [req-fc368cf8-50e3-47f8-884b-0e02d8800fec req-d00e16c1-09d6-48ea-b2d9-b7226e5b7a53 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Received event network-changed-112b3e51-47c2-499f-9108-af9d45576c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:44:43 compute-0 nova_compute[189508]: 2025-12-01 22:44:43.859 189512 DEBUG nova.compute.manager [req-fc368cf8-50e3-47f8-884b-0e02d8800fec req-d00e16c1-09d6-48ea-b2d9-b7226e5b7a53 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Refreshing instance network info cache due to event network-changed-112b3e51-47c2-499f-9108-af9d45576c1e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:44:43 compute-0 nova_compute[189508]: 2025-12-01 22:44:43.859 189512 DEBUG oslo_concurrency.lockutils [req-fc368cf8-50e3-47f8-884b-0e02d8800fec req-d00e16c1-09d6-48ea-b2d9-b7226e5b7a53 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:44:43 compute-0 nova_compute[189508]: 2025-12-01 22:44:43.860 189512 DEBUG oslo_concurrency.lockutils [req-fc368cf8-50e3-47f8-884b-0e02d8800fec req-d00e16c1-09d6-48ea-b2d9-b7226e5b7a53 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:44:43 compute-0 nova_compute[189508]: 2025-12-01 22:44:43.860 189512 DEBUG nova.network.neutron [req-fc368cf8-50e3-47f8-884b-0e02d8800fec req-d00e16c1-09d6-48ea-b2d9-b7226e5b7a53 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Refreshing network info cache for port 112b3e51-47c2-499f-9108-af9d45576c1e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:44:43 compute-0 nova_compute[189508]: 2025-12-01 22:44:43.866 189512 DEBUG nova.network.neutron [-] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:44:43 compute-0 nova_compute[189508]: 2025-12-01 22:44:43.890 189512 INFO nova.compute.manager [-] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Took 2.12 seconds to deallocate network for instance.#033[00m
Dec  1 22:44:43 compute-0 podman[245351]: 2025-12-01 22:44:43.893140602 +0000 UTC m=+0.146776166 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 22:44:43 compute-0 podman[245350]: 2025-12-01 22:44:43.923639617 +0000 UTC m=+0.179371560 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 22:44:43 compute-0 nova_compute[189508]: 2025-12-01 22:44:43.934 189512 DEBUG oslo_concurrency.lockutils [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:44:43 compute-0 nova_compute[189508]: 2025-12-01 22:44:43.935 189512 DEBUG oslo_concurrency.lockutils [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.072 189512 DEBUG nova.compute.provider_tree [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.092 189512 DEBUG nova.scheduler.client.report [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.115 189512 DEBUG oslo_concurrency.lockutils [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.150 189512 INFO nova.scheduler.client.report [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Deleted allocations for instance ef18b98f-df89-44d0-9215-5c2e556e10be#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.179 189512 INFO nova.network.neutron [req-fc368cf8-50e3-47f8-884b-0e02d8800fec req-d00e16c1-09d6-48ea-b2d9-b7226e5b7a53 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Port 112b3e51-47c2-499f-9108-af9d45576c1e from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.179 189512 DEBUG nova.network.neutron [req-fc368cf8-50e3-47f8-884b-0e02d8800fec req-d00e16c1-09d6-48ea-b2d9-b7226e5b7a53 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.211 189512 DEBUG oslo_concurrency.lockutils [req-fc368cf8-50e3-47f8-884b-0e02d8800fec req-d00e16c1-09d6-48ea-b2d9-b7226e5b7a53 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-ef18b98f-df89-44d0-9215-5c2e556e10be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.236 189512 DEBUG oslo_concurrency.lockutils [None req-ebd30009-d4e5-4e78-873f-21a228b0b06b 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ef18b98f-df89-44d0-9215-5c2e556e10be" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.373 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Updating instance_info_cache with network_info: [{"id": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "address": "fa:16:3e:b8:6b:fb", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e734aeb-82", "ovs_interfaceid": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.390 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-99b450eb-11ab-433d-9cf3-da58ea311e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.390 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.390 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.391 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.391 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.409 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.410 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.410 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.410 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.511 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.602 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.604 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.702 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.704 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:44:44 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:44:44.739 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.806 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.807 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.893 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.900 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.982 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:44:44 compute-0 nova_compute[189508]: 2025-12-01 22:44:44.983 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:44:45 compute-0 nova_compute[189508]: 2025-12-01 22:44:45.006 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:45 compute-0 nova_compute[189508]: 2025-12-01 22:44:45.047 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:44:45 compute-0 nova_compute[189508]: 2025-12-01 22:44:45.048 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:44:45 compute-0 nova_compute[189508]: 2025-12-01 22:44:45.116 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:44:45 compute-0 nova_compute[189508]: 2025-12-01 22:44:45.118 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:44:45 compute-0 nova_compute[189508]: 2025-12-01 22:44:45.220 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:44:45 compute-0 nova_compute[189508]: 2025-12-01 22:44:45.235 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:44:45 compute-0 nova_compute[189508]: 2025-12-01 22:44:45.309 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:44:45 compute-0 nova_compute[189508]: 2025-12-01 22:44:45.311 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:44:45 compute-0 nova_compute[189508]: 2025-12-01 22:44:45.402 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:44:45 compute-0 nova_compute[189508]: 2025-12-01 22:44:45.403 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:44:45 compute-0 nova_compute[189508]: 2025-12-01 22:44:45.482 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:44:45 compute-0 nova_compute[189508]: 2025-12-01 22:44:45.483 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:44:45 compute-0 nova_compute[189508]: 2025-12-01 22:44:45.544 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:44:46 compute-0 nova_compute[189508]: 2025-12-01 22:44:46.085 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:44:46 compute-0 nova_compute[189508]: 2025-12-01 22:44:46.087 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4763MB free_disk=72.15616607666016GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:44:46 compute-0 nova_compute[189508]: 2025-12-01 22:44:46.087 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:44:46 compute-0 nova_compute[189508]: 2025-12-01 22:44:46.088 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:44:46 compute-0 nova_compute[189508]: 2025-12-01 22:44:46.180 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:44:46 compute-0 nova_compute[189508]: 2025-12-01 22:44:46.181 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 99b450eb-11ab-433d-9cf3-da58ea311e94 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:44:46 compute-0 nova_compute[189508]: 2025-12-01 22:44:46.181 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance dae82663-6de4-4397-8aab-9559ddeaec24 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:44:46 compute-0 nova_compute[189508]: 2025-12-01 22:44:46.182 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:44:46 compute-0 nova_compute[189508]: 2025-12-01 22:44:46.183 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:44:46 compute-0 nova_compute[189508]: 2025-12-01 22:44:46.273 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:44:46 compute-0 nova_compute[189508]: 2025-12-01 22:44:46.288 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:44:46 compute-0 nova_compute[189508]: 2025-12-01 22:44:46.313 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:44:46 compute-0 nova_compute[189508]: 2025-12-01 22:44:46.314 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:44:46 compute-0 nova_compute[189508]: 2025-12-01 22:44:46.656 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:48 compute-0 podman[245430]: 2025-12-01 22:44:48.852009155 +0000 UTC m=+0.108540000 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:44:48 compute-0 podman[245433]: 2025-12-01 22:44:48.877117418 +0000 UTC m=+0.115951511 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, release-0.7.12=, container_name=kepler, architecture=x86_64, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to 
be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, distribution-scope=public, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  1 22:44:48 compute-0 podman[245432]: 2025-12-01 22:44:48.882936743 +0000 UTC m=+0.130491923 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, container_name=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Dec  1 22:44:48 compute-0 podman[245431]: 2025-12-01 22:44:48.899169894 +0000 UTC m=+0.147637510 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  1 22:44:50 compute-0 nova_compute[189508]: 2025-12-01 22:44:50.009 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:51 compute-0 nova_compute[189508]: 2025-12-01 22:44:51.122 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:44:51 compute-0 nova_compute[189508]: 2025-12-01 22:44:51.660 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:55 compute-0 nova_compute[189508]: 2025-12-01 22:44:55.013 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:56 compute-0 nova_compute[189508]: 2025-12-01 22:44:56.617 189512 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764629081.6153936, ef18b98f-df89-44d0-9215-5c2e556e10be => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:44:56 compute-0 nova_compute[189508]: 2025-12-01 22:44:56.618 189512 INFO nova.compute.manager [-] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] VM Stopped (Lifecycle Event)#033[00m
Dec  1 22:44:56 compute-0 nova_compute[189508]: 2025-12-01 22:44:56.648 189512 DEBUG nova.compute.manager [None req-77ff33d8-430e-4831-81b9-6c50cf26086f - - - - - -] [instance: ef18b98f-df89-44d0-9215-5c2e556e10be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:44:56 compute-0 nova_compute[189508]: 2025-12-01 22:44:56.663 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:44:59 compute-0 podman[203693]: time="2025-12-01T22:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:44:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:44:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
Dec  1 22:45:00 compute-0 nova_compute[189508]: 2025-12-01 22:45:00.017 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:01 compute-0 openstack_network_exporter[205887]: ERROR   22:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:45:01 compute-0 openstack_network_exporter[205887]: ERROR   22:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:45:01 compute-0 openstack_network_exporter[205887]: ERROR   22:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:45:01 compute-0 openstack_network_exporter[205887]: ERROR   22:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:45:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:45:01 compute-0 openstack_network_exporter[205887]: ERROR   22:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:45:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:45:01 compute-0 nova_compute[189508]: 2025-12-01 22:45:01.667 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:45:04.622 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:45:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:45:04.623 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:45:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:45:04.623 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:45:05 compute-0 nova_compute[189508]: 2025-12-01 22:45:05.019 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:05 compute-0 podman[245508]: 2025-12-01 22:45:05.861778757 +0000 UTC m=+0.123832304 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 22:45:06 compute-0 nova_compute[189508]: 2025-12-01 22:45:06.670 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:08 compute-0 podman[245532]: 2025-12-01 22:45:08.881213914 +0000 UTC m=+0.141015012 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 22:45:10 compute-0 nova_compute[189508]: 2025-12-01 22:45:10.023 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:10 compute-0 podman[245551]: 2025-12-01 22:45:10.809062202 +0000 UTC m=+0.094495022 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm)
Dec  1 22:45:11 compute-0 nova_compute[189508]: 2025-12-01 22:45:11.675 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:14 compute-0 podman[245571]: 2025-12-01 22:45:14.83301233 +0000 UTC m=+0.114567012 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, 
tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:45:14 compute-0 podman[245570]: 2025-12-01 22:45:14.864207235 +0000 UTC m=+0.161764291 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:45:15 compute-0 nova_compute[189508]: 2025-12-01 22:45:15.026 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:16 compute-0 ovn_controller[97770]: 2025-12-01T22:45:16Z|00053|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec  1 22:45:16 compute-0 nova_compute[189508]: 2025-12-01 22:45:16.679 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:19 compute-0 podman[245613]: 2025-12-01 22:45:19.859526622 +0000 UTC m=+0.122383013 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:45:19 compute-0 podman[245615]: 2025-12-01 22:45:19.863553206 +0000 UTC m=+0.107628144 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, com.redhat.component=ubi9-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 22:45:19 compute-0 podman[245614]: 2025-12-01 22:45:19.870137493 +0000 UTC m=+0.124645937 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, version=9.6, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=)
Dec  1 22:45:19 compute-0 podman[245612]: 2025-12-01 22:45:19.876489353 +0000 UTC m=+0.141537446 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:45:20 compute-0 nova_compute[189508]: 2025-12-01 22:45:20.030 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:21 compute-0 nova_compute[189508]: 2025-12-01 22:45:21.682 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:25 compute-0 nova_compute[189508]: 2025-12-01 22:45:25.032 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:26 compute-0 nova_compute[189508]: 2025-12-01 22:45:26.686 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:29 compute-0 podman[203693]: time="2025-12-01T22:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:45:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:45:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
Dec  1 22:45:30 compute-0 nova_compute[189508]: 2025-12-01 22:45:30.037 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:31 compute-0 openstack_network_exporter[205887]: ERROR   22:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:45:31 compute-0 openstack_network_exporter[205887]: ERROR   22:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:45:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:45:31 compute-0 openstack_network_exporter[205887]: ERROR   22:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:45:31 compute-0 openstack_network_exporter[205887]: ERROR   22:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:45:31 compute-0 openstack_network_exporter[205887]: ERROR   22:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:45:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:45:31 compute-0 nova_compute[189508]: 2025-12-01 22:45:31.689 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:35 compute-0 nova_compute[189508]: 2025-12-01 22:45:35.039 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:35 compute-0 nova_compute[189508]: 2025-12-01 22:45:35.195 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.269 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.270 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ac06e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.285 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'db72b066-1974-41bb-a917-13b5ba129196', 'name': 'test_0', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.291 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dae82663-6de4-4397-8aab-9559ddeaec24', 'name': 'vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {'metering.server_group': '40d7879f-33f5-4fcb-8784-d9088730e18f'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.296 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '99b450eb-11ab-433d-9cf3-da58ea311e94', 'name': 'vn-xggku2d-wifaxhcghats-izgcjuxscyy2-vnf-fyan4lptzpzi', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {'metering.server_group': '40d7879f-33f5-4fcb-8784-d9088730e18f'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.297 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.297 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.297 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.298 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T22:45:35.297984) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.305 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.312 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.319 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.321 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.322 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.323 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.323 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.323 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.324 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.324 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T22:45:35.323354) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.325 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.326 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.326 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.327 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.327 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.327 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.328 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T22:45:35.327675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.329 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.330 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.330 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.330 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.331 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T22:45:35.331127) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.380 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.381 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.381 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.433 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.434 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.434 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.483 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.483 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.484 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.484 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.484 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.485 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.485 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.485 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T22:45:35.485463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.619 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.621 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.622 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.757 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.758 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.759 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.846 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.847 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.847 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.848 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.849 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.849 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.849 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.849 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T22:45:35.849733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.850 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 484161753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.851 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 126486600 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.851 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 84264950 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.852 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.latency volume: 529113669 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.852 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.latency volume: 125664984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.852 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.latency volume: 99600138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.853 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.latency volume: 518522445 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.853 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.latency volume: 95166420 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.854 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.latency volume: 71008121 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.854 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.855 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.855 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.855 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.855 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.856 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T22:45:35.855830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.856 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.856 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.857 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.857 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.857 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.858 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.858 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.858 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.859 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.859 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.860 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.860 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.860 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.861 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.861 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T22:45:35.861103) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.861 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.862 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.862 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.862 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.863 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.863 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.864 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.864 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.864 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.865 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.865 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.866 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.866 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.866 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T22:45:35.866528) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.867 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.867 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.867 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.868 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.868 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.868 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.869 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.869 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.870 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.870 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.871 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.871 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.871 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.871 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.872 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T22:45:35.871663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.872 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.872 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.873 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.873 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.873 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.874 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.874 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.874 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.875 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.876 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.876 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.876 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.876 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.877 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.877 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.877 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T22:45:35.877359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.878 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 2925316221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.878 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 17009348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.878 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.879 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.latency volume: 1954219616 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.879 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.latency volume: 13544625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.880 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.880 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.latency volume: 1768561782 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.881 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.latency volume: 11037405 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.881 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.882 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.882 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.882 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.882 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.883 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.883 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.883 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T22:45:35.883214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.916 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/cpu volume: 43470000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.948 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/cpu volume: 36790000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.976 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/cpu volume: 38070000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.977 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.977 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.977 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.978 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.978 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.978 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T22:45:35.978386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.979 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.979 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.979 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.980 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.980 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.981 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.981 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.981 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.981 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T22:45:35.981525) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.982 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.982 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.982 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.983 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.983 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.984 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.984 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.984 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T22:45:35.984487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.985 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.985 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.985 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.986 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.986 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.986 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.987 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.987 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.987 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.989 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.989 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.989 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.990 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.990 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T22:45:35.989989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.991 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.991 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.991 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.991 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.992 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T22:45:35.992145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.992 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.993 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.993 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.994 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.994 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.994 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.994 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.995 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.995 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T22:45:35.995152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.996 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.996 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.996 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.997 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.997 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.997 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.997 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T22:45:35.997323) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.997 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.998 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.998 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.999 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:45:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.999 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:35.999 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.000 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.000 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T22:45:36.000003) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.000 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.000 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.001 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.001 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.001 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.002 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.002 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.002 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.002 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T22:45:36.002632) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.002 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.003 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.004 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.004 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.005 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.005 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.005 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.005 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.005 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T22:45:36.005707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.005 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.006 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.006 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.007 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.007 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.007 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.008 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.008 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/memory.usage volume: 48.75390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.008 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T22:45:36.008398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.009 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.009 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/memory.usage volume: 48.921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.009 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.010 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.010 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.011 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.011 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T22:45:36.011472) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.012 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.012 14 DEBUG ceilometer.compute.pollsters [-] 99b450eb-11ab-433d-9cf3-da58ea311e94/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.013 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.013 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.013 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.013 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.013 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.013 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.014 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:45:36.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:45:36 compute-0 nova_compute[189508]: 2025-12-01 22:45:36.692 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:36 compute-0 podman[245694]: 2025-12-01 22:45:36.856672317 +0000 UTC m=+0.112842873 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:45:39 compute-0 podman[245717]: 2025-12-01 22:45:39.875896778 +0000 UTC m=+0.136114623 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3)
Dec  1 22:45:40 compute-0 nova_compute[189508]: 2025-12-01 22:45:40.042 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:40 compute-0 nova_compute[189508]: 2025-12-01 22:45:40.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:45:40 compute-0 nova_compute[189508]: 2025-12-01 22:45:40.202 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:45:41 compute-0 nova_compute[189508]: 2025-12-01 22:45:41.696 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:41 compute-0 podman[245737]: 2025-12-01 22:45:41.822095795 +0000 UTC m=+0.100423051 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  1 22:45:42 compute-0 nova_compute[189508]: 2025-12-01 22:45:42.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:45:42 compute-0 nova_compute[189508]: 2025-12-01 22:45:42.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:45:44 compute-0 nova_compute[189508]: 2025-12-01 22:45:44.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:45:44 compute-0 nova_compute[189508]: 2025-12-01 22:45:44.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:45:44 compute-0 nova_compute[189508]: 2025-12-01 22:45:44.495 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:45:44 compute-0 nova_compute[189508]: 2025-12-01 22:45:44.496 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:45:44 compute-0 nova_compute[189508]: 2025-12-01 22:45:44.497 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:45:45 compute-0 nova_compute[189508]: 2025-12-01 22:45:45.046 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:45 compute-0 podman[245757]: 2025-12-01 22:45:45.883749322 +0000 UTC m=+0.136321109 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:45:45 compute-0 podman[245756]: 2025-12-01 22:45:45.949707354 +0000 UTC m=+0.205522103 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 22:45:46 compute-0 nova_compute[189508]: 2025-12-01 22:45:46.700 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.316 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Updating instance_info_cache with network_info: [{"id": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "address": "fa:16:3e:a3:f6:49", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4f1e6ff-94", "ovs_interfaceid": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.350 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.352 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.354 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.355 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.356 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.390 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.391 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.392 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.392 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.480 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.548 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.549 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.652 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.653 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.737 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.738 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.814 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.827 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.915 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:45:47 compute-0 nova_compute[189508]: 2025-12-01 22:45:47.917 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:45:48 compute-0 nova_compute[189508]: 2025-12-01 22:45:48.004 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:45:48 compute-0 nova_compute[189508]: 2025-12-01 22:45:48.006 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:45:48 compute-0 nova_compute[189508]: 2025-12-01 22:45:48.077 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:45:48 compute-0 nova_compute[189508]: 2025-12-01 22:45:48.078 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:45:48 compute-0 nova_compute[189508]: 2025-12-01 22:45:48.174 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:45:48 compute-0 nova_compute[189508]: 2025-12-01 22:45:48.183 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:45:48 compute-0 nova_compute[189508]: 2025-12-01 22:45:48.245 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:45:48 compute-0 nova_compute[189508]: 2025-12-01 22:45:48.246 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:45:48 compute-0 nova_compute[189508]: 2025-12-01 22:45:48.333 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:45:48 compute-0 nova_compute[189508]: 2025-12-01 22:45:48.334 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:45:48 compute-0 nova_compute[189508]: 2025-12-01 22:45:48.433 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:45:48 compute-0 nova_compute[189508]: 2025-12-01 22:45:48.434 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:45:48 compute-0 nova_compute[189508]: 2025-12-01 22:45:48.532 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:45:49 compute-0 nova_compute[189508]: 2025-12-01 22:45:49.154 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:45:49 compute-0 nova_compute[189508]: 2025-12-01 22:45:49.155 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4763MB free_disk=72.15618515014648GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:45:49 compute-0 nova_compute[189508]: 2025-12-01 22:45:49.156 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:45:49 compute-0 nova_compute[189508]: 2025-12-01 22:45:49.157 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:45:49 compute-0 nova_compute[189508]: 2025-12-01 22:45:49.276 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:45:49 compute-0 nova_compute[189508]: 2025-12-01 22:45:49.277 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 99b450eb-11ab-433d-9cf3-da58ea311e94 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:45:49 compute-0 nova_compute[189508]: 2025-12-01 22:45:49.277 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance dae82663-6de4-4397-8aab-9559ddeaec24 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:45:49 compute-0 nova_compute[189508]: 2025-12-01 22:45:49.278 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:45:49 compute-0 nova_compute[189508]: 2025-12-01 22:45:49.278 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:45:49 compute-0 nova_compute[189508]: 2025-12-01 22:45:49.433 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:45:49 compute-0 nova_compute[189508]: 2025-12-01 22:45:49.455 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:45:49 compute-0 nova_compute[189508]: 2025-12-01 22:45:49.459 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:45:49 compute-0 nova_compute[189508]: 2025-12-01 22:45:49.460 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:45:50 compute-0 nova_compute[189508]: 2025-12-01 22:45:50.050 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:50 compute-0 podman[245844]: 2025-12-01 22:45:50.838505019 +0000 UTC m=+0.088643926 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 
'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Dec  1 22:45:50 compute-0 podman[245842]: 2025-12-01 22:45:50.866160774 +0000 UTC m=+0.116439095 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec  1 22:45:50 compute-0 podman[245841]: 2025-12-01 22:45:50.879327687 +0000 UTC m=+0.138289155 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:45:50 compute-0 podman[245843]: 2025-12-01 22:45:50.883149066 +0000 UTC m=+0.134966340 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, release=1755695350, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_id=edpm, name=ubi9-minimal)
Dec  1 22:45:51 compute-0 nova_compute[189508]: 2025-12-01 22:45:51.305 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:45:51 compute-0 nova_compute[189508]: 2025-12-01 22:45:51.704 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:55 compute-0 nova_compute[189508]: 2025-12-01 22:45:55.054 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:56 compute-0 nova_compute[189508]: 2025-12-01 22:45:56.707 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:45:59 compute-0 podman[203693]: time="2025-12-01T22:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:45:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:45:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Dec  1 22:46:00 compute-0 nova_compute[189508]: 2025-12-01 22:46:00.058 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:01 compute-0 openstack_network_exporter[205887]: ERROR   22:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:46:01 compute-0 openstack_network_exporter[205887]: ERROR   22:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:46:01 compute-0 openstack_network_exporter[205887]: ERROR   22:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:46:01 compute-0 openstack_network_exporter[205887]: ERROR   22:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:46:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:46:01 compute-0 openstack_network_exporter[205887]: ERROR   22:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:46:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:46:01 compute-0 nova_compute[189508]: 2025-12-01 22:46:01.710 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:04.624 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:46:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:04.625 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:46:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:04.625 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:46:05 compute-0 nova_compute[189508]: 2025-12-01 22:46:05.061 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:06 compute-0 nova_compute[189508]: 2025-12-01 22:46:06.713 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:07 compute-0 podman[245919]: 2025-12-01 22:46:07.847155219 +0000 UTC m=+0.115172159 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 22:46:10 compute-0 nova_compute[189508]: 2025-12-01 22:46:10.065 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:10 compute-0 podman[245940]: 2025-12-01 22:46:10.814949731 +0000 UTC m=+0.095323646 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec  1 22:46:11 compute-0 nova_compute[189508]: 2025-12-01 22:46:11.716 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:12 compute-0 podman[245958]: 2025-12-01 22:46:12.86836957 +0000 UTC m=+0.140443776 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  1 22:46:15 compute-0 nova_compute[189508]: 2025-12-01 22:46:15.070 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:16 compute-0 nova_compute[189508]: 2025-12-01 22:46:16.719 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:16 compute-0 podman[245978]: 2025-12-01 22:46:16.845496519 +0000 UTC m=+0.104834025 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 22:46:16 compute-0 podman[245977]: 2025-12-01 22:46:16.903135174 +0000 UTC m=+0.176988622 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 22:46:20 compute-0 nova_compute[189508]: 2025-12-01 22:46:20.073 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:21 compute-0 nova_compute[189508]: 2025-12-01 22:46:21.725 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:21 compute-0 podman[246024]: 2025-12-01 22:46:21.832939983 +0000 UTC m=+0.109044445 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 22:46:21 compute-0 podman[246026]: 2025-12-01 22:46:21.85082023 +0000 UTC m=+0.109772245 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Dec  1 22:46:21 compute-0 podman[246025]: 2025-12-01 22:46:21.854031811 +0000 UTC m=+0.123806433 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:46:21 compute-0 podman[246031]: 2025-12-01 22:46:21.855766121 +0000 UTC m=+0.101507201 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, name=ubi9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, config_id=edpm, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc.)
Dec  1 22:46:25 compute-0 nova_compute[189508]: 2025-12-01 22:46:25.076 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:26 compute-0 nova_compute[189508]: 2025-12-01 22:46:26.738 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:29 compute-0 podman[203693]: time="2025-12-01T22:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:46:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:46:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Dec  1 22:46:30 compute-0 nova_compute[189508]: 2025-12-01 22:46:30.083 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:31 compute-0 openstack_network_exporter[205887]: ERROR   22:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:46:31 compute-0 openstack_network_exporter[205887]: ERROR   22:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:46:31 compute-0 openstack_network_exporter[205887]: ERROR   22:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:46:31 compute-0 openstack_network_exporter[205887]: ERROR   22:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:46:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:46:31 compute-0 openstack_network_exporter[205887]: ERROR   22:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:46:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:46:31 compute-0 nova_compute[189508]: 2025-12-01 22:46:31.743 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:35 compute-0 nova_compute[189508]: 2025-12-01 22:46:35.087 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:35 compute-0 nova_compute[189508]: 2025-12-01 22:46:35.195 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:46:35 compute-0 nova_compute[189508]: 2025-12-01 22:46:35.197 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:46:36 compute-0 nova_compute[189508]: 2025-12-01 22:46:36.748 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:38 compute-0 podman[246104]: 2025-12-01 22:46:38.845244347 +0000 UTC m=+0.104476045 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:46:40 compute-0 nova_compute[189508]: 2025-12-01 22:46:40.089 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:40 compute-0 nova_compute[189508]: 2025-12-01 22:46:40.567 189512 DEBUG nova.compute.manager [req-322ee3da-0153-4b4d-84ca-ce6ed3692fc1 req-b0c62999-686c-401c-8789-712445983615 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Received event network-changed-7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:46:40 compute-0 nova_compute[189508]: 2025-12-01 22:46:40.568 189512 DEBUG nova.compute.manager [req-322ee3da-0153-4b4d-84ca-ce6ed3692fc1 req-b0c62999-686c-401c-8789-712445983615 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Refreshing instance network info cache due to event network-changed-7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:46:40 compute-0 nova_compute[189508]: 2025-12-01 22:46:40.568 189512 DEBUG oslo_concurrency.lockutils [req-322ee3da-0153-4b4d-84ca-ce6ed3692fc1 req-b0c62999-686c-401c-8789-712445983615 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-99b450eb-11ab-433d-9cf3-da58ea311e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:46:40 compute-0 nova_compute[189508]: 2025-12-01 22:46:40.569 189512 DEBUG oslo_concurrency.lockutils [req-322ee3da-0153-4b4d-84ca-ce6ed3692fc1 req-b0c62999-686c-401c-8789-712445983615 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-99b450eb-11ab-433d-9cf3-da58ea311e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:46:40 compute-0 nova_compute[189508]: 2025-12-01 22:46:40.569 189512 DEBUG nova.network.neutron [req-322ee3da-0153-4b4d-84ca-ce6ed3692fc1 req-b0c62999-686c-401c-8789-712445983615 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Refreshing network info cache for port 7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.017 189512 DEBUG oslo_concurrency.lockutils [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "99b450eb-11ab-433d-9cf3-da58ea311e94" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.018 189512 DEBUG oslo_concurrency.lockutils [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.019 189512 DEBUG oslo_concurrency.lockutils [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.019 189512 DEBUG oslo_concurrency.lockutils [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.020 189512 DEBUG oslo_concurrency.lockutils [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.021 189512 INFO nova.compute.manager [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Terminating instance#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.022 189512 DEBUG nova.compute.manager [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 22:46:41 compute-0 kernel: tap7e734aeb-82 (unregistering): left promiscuous mode
Dec  1 22:46:41 compute-0 NetworkManager[56278]: <info>  [1764629201.0852] device (tap7e734aeb-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 22:46:41 compute-0 ovn_controller[97770]: 2025-12-01T22:46:41Z|00054|binding|INFO|Releasing lport 7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 from this chassis (sb_readonly=0)
Dec  1 22:46:41 compute-0 ovn_controller[97770]: 2025-12-01T22:46:41Z|00055|binding|INFO|Setting lport 7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 down in Southbound
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.104 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:41 compute-0 ovn_controller[97770]: 2025-12-01T22:46:41Z|00056|binding|INFO|Removing iface tap7e734aeb-82 ovn-installed in OVS
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.110 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:41.116 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:6b:fb 192.168.0.11'], port_security=['fa:16:3e:b8:6b:fb 192.168.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-37pfkxggku2d-wifaxhcghats-izgcjuxscyy2-port-ncy6cathjcrw', 'neutron:cidrs': '192.168.0.11/24', 'neutron:device_id': '99b450eb-11ab-433d-9cf3-da58ea311e94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-37pfkxggku2d-wifaxhcghats-izgcjuxscyy2-port-ncy6cathjcrw', 'neutron:project_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a56d0f98-60b7-42d6-a9fa-4c77301b81c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8157a1f-e2f4-4050-ab6e-a95d2880ddbb, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:46:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:41.118 106662 INFO neutron.agent.ovn.metadata.agent [-] Port 7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 in datapath dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c unbound from our chassis#033[00m
Dec  1 22:46:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:41.120 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.139 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:41 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec  1 22:46:41 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 42.301s CPU time.
Dec  1 22:46:41 compute-0 systemd-machined[155759]: Machine qemu-3-instance-00000003 terminated.
Dec  1 22:46:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:41.155 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[7370f5d2-0af0-43e9-becc-1f4d330550fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:46:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:41.207 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[2aab7c2e-852f-48e4-afa9-2f7ca97ea98a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:46:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:41.212 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[b49a93a5-1c18-4128-ad2f-e1a7f9edf284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:46:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:41.254 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[8d789a37-43e2-44ba-96b4-c672f5562abc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.260 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:41 compute-0 podman[246131]: 2025-12-01 22:46:41.274801929 +0000 UTC m=+0.148093392 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 22:46:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:41.279 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[0a2dc4f8-981c-43f5-86ab-c0369c070f32]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd6e3c27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:b1:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 616, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 616, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384760, 'reachable_time': 22904, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246164, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:46:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:41.303 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[aa7ddb32-6e91-428d-ae80-f36eb22bdc6f]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapdd6e3c27-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384779, 'tstamp': 384779}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246172, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdd6e3c27-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384784, 'tstamp': 384784}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246172, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:46:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:41.305 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd6e3c27-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.313 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:41.321 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd6e3c27-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:46:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:41.321 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.322 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:41.322 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdd6e3c27-10, col_values=(('external_ids', {'iface-id': 'e303b09b-4673-4950-aa2d-91085a5bc5f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:46:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:41.322 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.353 189512 INFO nova.virt.libvirt.driver [-] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Instance destroyed successfully.#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.354 189512 DEBUG nova.objects.instance [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lazy-loading 'resources' on Instance uuid 99b450eb-11ab-433d-9cf3-da58ea311e94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.389 189512 DEBUG nova.virt.libvirt.vif [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:38:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-xggku2d-wifaxhcghats-izgcjuxscyy2-vnf-fyan4lptzpzi',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xggku2d-wifaxhcghats-izgcjuxscyy2-vnf-fyan4lptzpzi',id=3,image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T22:39:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='40d7879f-33f5-4fcb-8784-d9088730e18f'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='af2fbf0e1b5f40c19aed69d241db7727',ramdisk_id='',reservation_id='r-8cy17cl9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T22:39:03Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMjQ4NjYxMTY5MTAxMzU0NDMzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAyNDg2NjExNjkxMDEzNTQ0MzM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDI0ODY2MTE2OTEwMTM1NDQzMz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAyNDg2NjExNjkxMDEzNTQ0MzM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMjQ4NjYxMTY5MTAxMzU0NDMzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMjQ4NjYxMTY5MTAxMzU0NDMzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  1 22:46:41 compute-0 nova_compute[189508]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDI0O
DY2MTE2OTEwMTM1NDQzMz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTAyNDg2NjExNjkxMDEzNTQ0MzM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wMjQ4NjYxMTY5MTAxMzU0NDMzPT0tLQo=',user_id='3b810e864d6c4d058e539f62ad181096',uuid=99b450eb-11ab-433d-9cf3-da58ea311e94,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "address": "fa:16:3e:b8:6b:fb", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e734aeb-82", "ovs_interfaceid": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.393 189512 DEBUG nova.network.os_vif_util [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converting VIF {"id": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "address": "fa:16:3e:b8:6b:fb", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e734aeb-82", "ovs_interfaceid": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.395 189512 DEBUG nova.network.os_vif_util [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:6b:fb,bridge_name='br-int',has_traffic_filtering=True,id=7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7e734aeb-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.395 189512 DEBUG os_vif [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:6b:fb,bridge_name='br-int',has_traffic_filtering=True,id=7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7e734aeb-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.398 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.398 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e734aeb-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.401 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.404 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.410 189512 INFO os_vif [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:6b:fb,bridge_name='br-int',has_traffic_filtering=True,id=7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7e734aeb-82')#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.412 189512 INFO nova.virt.libvirt.driver [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Deleting instance files /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94_del#033[00m
Dec  1 22:46:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:41.412 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:46:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:41.413 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.414 189512 INFO nova.virt.libvirt.driver [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Deletion of /var/lib/nova/instances/99b450eb-11ab-433d-9cf3-da58ea311e94_del complete#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.420 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.480 189512 INFO nova.compute.manager [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Took 0.46 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.481 189512 DEBUG oslo.service.loopingcall [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.482 189512 DEBUG nova.compute.manager [-] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 22:46:41 compute-0 nova_compute[189508]: 2025-12-01 22:46:41.483 189512 DEBUG nova.network.neutron [-] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 22:46:41 compute-0 rsyslogd[236992]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 22:46:41.389 189512 DEBUG nova.virt.libvirt.vif [None req-178d714d-b8 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.114 189512 DEBUG nova.network.neutron [req-322ee3da-0153-4b4d-84ca-ce6ed3692fc1 req-b0c62999-686c-401c-8789-712445983615 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Updated VIF entry in instance network info cache for port 7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.114 189512 DEBUG nova.network.neutron [req-322ee3da-0153-4b4d-84ca-ce6ed3692fc1 req-b0c62999-686c-401c-8789-712445983615 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Updating instance_info_cache with network_info: [{"id": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "address": "fa:16:3e:b8:6b:fb", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e734aeb-82", "ovs_interfaceid": "7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.142 189512 DEBUG oslo_concurrency.lockutils [req-322ee3da-0153-4b4d-84ca-ce6ed3692fc1 req-b0c62999-686c-401c-8789-712445983615 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-99b450eb-11ab-433d-9cf3-da58ea311e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.880 189512 DEBUG nova.compute.manager [req-a7af6fc2-adf2-4962-acc5-9fc139899e57 req-8e2203f4-db39-42fa-b5d2-01d70d1774ee c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Received event network-vif-unplugged-7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.881 189512 DEBUG oslo_concurrency.lockutils [req-a7af6fc2-adf2-4962-acc5-9fc139899e57 req-8e2203f4-db39-42fa-b5d2-01d70d1774ee c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.882 189512 DEBUG oslo_concurrency.lockutils [req-a7af6fc2-adf2-4962-acc5-9fc139899e57 req-8e2203f4-db39-42fa-b5d2-01d70d1774ee c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.882 189512 DEBUG oslo_concurrency.lockutils [req-a7af6fc2-adf2-4962-acc5-9fc139899e57 req-8e2203f4-db39-42fa-b5d2-01d70d1774ee c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.883 189512 DEBUG nova.compute.manager [req-a7af6fc2-adf2-4962-acc5-9fc139899e57 req-8e2203f4-db39-42fa-b5d2-01d70d1774ee c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] No waiting events found dispatching network-vif-unplugged-7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.883 189512 DEBUG nova.compute.manager [req-a7af6fc2-adf2-4962-acc5-9fc139899e57 req-8e2203f4-db39-42fa-b5d2-01d70d1774ee c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Received event network-vif-unplugged-7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.883 189512 DEBUG nova.compute.manager [req-a7af6fc2-adf2-4962-acc5-9fc139899e57 req-8e2203f4-db39-42fa-b5d2-01d70d1774ee c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Received event network-vif-plugged-7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.884 189512 DEBUG oslo_concurrency.lockutils [req-a7af6fc2-adf2-4962-acc5-9fc139899e57 req-8e2203f4-db39-42fa-b5d2-01d70d1774ee c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.884 189512 DEBUG oslo_concurrency.lockutils [req-a7af6fc2-adf2-4962-acc5-9fc139899e57 req-8e2203f4-db39-42fa-b5d2-01d70d1774ee c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.885 189512 DEBUG oslo_concurrency.lockutils [req-a7af6fc2-adf2-4962-acc5-9fc139899e57 req-8e2203f4-db39-42fa-b5d2-01d70d1774ee c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.885 189512 DEBUG nova.compute.manager [req-a7af6fc2-adf2-4962-acc5-9fc139899e57 req-8e2203f4-db39-42fa-b5d2-01d70d1774ee c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] No waiting events found dispatching network-vif-plugged-7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:46:42 compute-0 nova_compute[189508]: 2025-12-01 22:46:42.885 189512 WARNING nova.compute.manager [req-a7af6fc2-adf2-4962-acc5-9fc139899e57 req-8e2203f4-db39-42fa-b5d2-01d70d1774ee c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Received unexpected event network-vif-plugged-7e734aeb-82ae-472a-8e14-bc9e2cf8dbf3 for instance with vm_state active and task_state deleting.#033[00m
Dec  1 22:46:43 compute-0 nova_compute[189508]: 2025-12-01 22:46:43.034 189512 DEBUG nova.network.neutron [-] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:46:43 compute-0 nova_compute[189508]: 2025-12-01 22:46:43.052 189512 INFO nova.compute.manager [-] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Took 1.57 seconds to deallocate network for instance.#033[00m
Dec  1 22:46:43 compute-0 nova_compute[189508]: 2025-12-01 22:46:43.113 189512 DEBUG oslo_concurrency.lockutils [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:46:43 compute-0 nova_compute[189508]: 2025-12-01 22:46:43.114 189512 DEBUG oslo_concurrency.lockutils [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:46:43 compute-0 nova_compute[189508]: 2025-12-01 22:46:43.235 189512 DEBUG nova.compute.provider_tree [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:46:43 compute-0 nova_compute[189508]: 2025-12-01 22:46:43.258 189512 DEBUG nova.scheduler.client.report [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:46:43 compute-0 nova_compute[189508]: 2025-12-01 22:46:43.282 189512 DEBUG oslo_concurrency.lockutils [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:46:43 compute-0 nova_compute[189508]: 2025-12-01 22:46:43.311 189512 INFO nova.scheduler.client.report [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Deleted allocations for instance 99b450eb-11ab-433d-9cf3-da58ea311e94#033[00m
Dec  1 22:46:43 compute-0 nova_compute[189508]: 2025-12-01 22:46:43.389 189512 DEBUG oslo_concurrency.lockutils [None req-178d714d-b8c5-44cd-b6a2-ada36737103d 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "99b450eb-11ab-433d-9cf3-da58ea311e94" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.371s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:46:43 compute-0 podman[246183]: 2025-12-01 22:46:43.864851874 +0000 UTC m=+0.141617779 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 22:46:44 compute-0 nova_compute[189508]: 2025-12-01 22:46:44.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:46:44 compute-0 nova_compute[189508]: 2025-12-01 22:46:44.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:46:44 compute-0 nova_compute[189508]: 2025-12-01 22:46:44.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:46:44 compute-0 nova_compute[189508]: 2025-12-01 22:46:44.383 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:46:44 compute-0 nova_compute[189508]: 2025-12-01 22:46:44.384 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:46:44 compute-0 nova_compute[189508]: 2025-12-01 22:46:44.384 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:46:44 compute-0 nova_compute[189508]: 2025-12-01 22:46:44.384 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid db72b066-1974-41bb-a917-13b5ba129196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:46:45 compute-0 nova_compute[189508]: 2025-12-01 22:46:45.094 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:46 compute-0 nova_compute[189508]: 2025-12-01 22:46:46.400 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:46 compute-0 nova_compute[189508]: 2025-12-01 22:46:46.730 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updating instance_info_cache with network_info: [{"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:46:46 compute-0 nova_compute[189508]: 2025-12-01 22:46:46.752 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:46:46 compute-0 nova_compute[189508]: 2025-12-01 22:46:46.753 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:46:46 compute-0 nova_compute[189508]: 2025-12-01 22:46:46.754 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:46:46 compute-0 nova_compute[189508]: 2025-12-01 22:46:46.754 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:46:46 compute-0 nova_compute[189508]: 2025-12-01 22:46:46.784 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:46:46 compute-0 nova_compute[189508]: 2025-12-01 22:46:46.784 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:46:46 compute-0 nova_compute[189508]: 2025-12-01 22:46:46.785 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:46:46 compute-0 nova_compute[189508]: 2025-12-01 22:46:46.785 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:46:46 compute-0 nova_compute[189508]: 2025-12-01 22:46:46.924 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:46:47 compute-0 nova_compute[189508]: 2025-12-01 22:46:47.034 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:46:47 compute-0 nova_compute[189508]: 2025-12-01 22:46:47.037 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:46:47 compute-0 nova_compute[189508]: 2025-12-01 22:46:47.106 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:46:47 compute-0 nova_compute[189508]: 2025-12-01 22:46:47.108 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:46:47 compute-0 nova_compute[189508]: 2025-12-01 22:46:47.204 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:46:47 compute-0 nova_compute[189508]: 2025-12-01 22:46:47.209 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:46:47 compute-0 nova_compute[189508]: 2025-12-01 22:46:47.281 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:46:47 compute-0 nova_compute[189508]: 2025-12-01 22:46:47.293 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:46:47 compute-0 nova_compute[189508]: 2025-12-01 22:46:47.389 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:46:47 compute-0 nova_compute[189508]: 2025-12-01 22:46:47.391 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:46:47 compute-0 nova_compute[189508]: 2025-12-01 22:46:47.492 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:46:47 compute-0 nova_compute[189508]: 2025-12-01 22:46:47.493 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:46:47 compute-0 nova_compute[189508]: 2025-12-01 22:46:47.555 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:46:47 compute-0 nova_compute[189508]: 2025-12-01 22:46:47.556 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:46:47 compute-0 nova_compute[189508]: 2025-12-01 22:46:47.625 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:46:47 compute-0 podman[246230]: 2025-12-01 22:46:47.877604644 +0000 UTC m=+0.139754076 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec  1 22:46:47 compute-0 podman[246229]: 2025-12-01 22:46:47.91306535 +0000 UTC m=+0.192739139 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  1 22:46:48 compute-0 nova_compute[189508]: 2025-12-01 22:46:48.075 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:46:48 compute-0 nova_compute[189508]: 2025-12-01 22:46:48.078 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4951MB free_disk=72.17876815795898GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:46:48 compute-0 nova_compute[189508]: 2025-12-01 22:46:48.079 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:46:48 compute-0 nova_compute[189508]: 2025-12-01 22:46:48.079 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:46:48 compute-0 nova_compute[189508]: 2025-12-01 22:46:48.194 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:46:48 compute-0 nova_compute[189508]: 2025-12-01 22:46:48.195 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance dae82663-6de4-4397-8aab-9559ddeaec24 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:46:48 compute-0 nova_compute[189508]: 2025-12-01 22:46:48.196 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:46:48 compute-0 nova_compute[189508]: 2025-12-01 22:46:48.196 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:46:48 compute-0 nova_compute[189508]: 2025-12-01 22:46:48.287 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:46:48 compute-0 nova_compute[189508]: 2025-12-01 22:46:48.306 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:46:48 compute-0 nova_compute[189508]: 2025-12-01 22:46:48.337 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:46:48 compute-0 nova_compute[189508]: 2025-12-01 22:46:48.338 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:46:48 compute-0 nova_compute[189508]: 2025-12-01 22:46:48.783 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:46:49 compute-0 nova_compute[189508]: 2025-12-01 22:46:49.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:46:50 compute-0 nova_compute[189508]: 2025-12-01 22:46:50.098 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:51 compute-0 nova_compute[189508]: 2025-12-01 22:46:51.403 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:51 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:46:51.414 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:46:52 compute-0 podman[246275]: 2025-12-01 22:46:52.810905511 +0000 UTC m=+0.086189256 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 22:46:52 compute-0 podman[246277]: 2025-12-01 22:46:52.815178622 +0000 UTC m=+0.086442423 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.tags=base rhel9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release=1214.1726694543, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  1 22:46:52 compute-0 podman[246274]: 2025-12-01 22:46:52.842136867 +0000 UTC m=+0.116017973 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:46:52 compute-0 podman[246276]: 2025-12-01 22:46:52.851659187 +0000 UTC m=+0.127808237 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Dec  1 22:46:55 compute-0 nova_compute[189508]: 2025-12-01 22:46:55.103 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:56 compute-0 nova_compute[189508]: 2025-12-01 22:46:56.348 189512 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764629201.3464048, 99b450eb-11ab-433d-9cf3-da58ea311e94 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:46:56 compute-0 nova_compute[189508]: 2025-12-01 22:46:56.349 189512 INFO nova.compute.manager [-] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] VM Stopped (Lifecycle Event)#033[00m
Dec  1 22:46:56 compute-0 nova_compute[189508]: 2025-12-01 22:46:56.385 189512 DEBUG nova.compute.manager [None req-0d51d7a8-99b4-4c40-85c1-4cf47add790a - - - - - -] [instance: 99b450eb-11ab-433d-9cf3-da58ea311e94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:46:56 compute-0 nova_compute[189508]: 2025-12-01 22:46:56.407 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:46:59 compute-0 podman[203693]: time="2025-12-01T22:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:46:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:46:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec  1 22:47:00 compute-0 nova_compute[189508]: 2025-12-01 22:47:00.107 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:01 compute-0 nova_compute[189508]: 2025-12-01 22:47:01.411 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:01 compute-0 openstack_network_exporter[205887]: ERROR   22:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:47:01 compute-0 openstack_network_exporter[205887]: ERROR   22:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:47:01 compute-0 openstack_network_exporter[205887]: ERROR   22:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:47:01 compute-0 openstack_network_exporter[205887]: ERROR   22:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:47:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:47:01 compute-0 openstack_network_exporter[205887]: ERROR   22:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:47:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:47:01 compute-0 systemd-logind[788]: New session 29 of user zuul.
Dec  1 22:47:01 compute-0 systemd[1]: Started Session 29 of User zuul.
Dec  1 22:47:02 compute-0 python3[246532]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 22:47:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:47:04.626 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:47:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:47:04.629 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:47:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:47:04.633 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:47:05 compute-0 nova_compute[189508]: 2025-12-01 22:47:05.111 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:06 compute-0 nova_compute[189508]: 2025-12-01 22:47:06.414 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:09 compute-0 podman[246572]: 2025-12-01 22:47:09.874432079 +0000 UTC m=+0.129909117 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:47:10 compute-0 nova_compute[189508]: 2025-12-01 22:47:10.114 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:11 compute-0 nova_compute[189508]: 2025-12-01 22:47:11.417 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:11 compute-0 podman[246594]: 2025-12-01 22:47:11.867829805 +0000 UTC m=+0.133393326 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  1 22:47:14 compute-0 podman[246615]: 2025-12-01 22:47:14.84134836 +0000 UTC m=+0.119735189 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Dec  1 22:47:15 compute-0 nova_compute[189508]: 2025-12-01 22:47:15.117 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:15 compute-0 ovn_controller[97770]: 2025-12-01T22:47:15Z|00057|memory_trim|INFO|Detected inactivity (last active 30018 ms ago): trimming memory
Dec  1 22:47:16 compute-0 nova_compute[189508]: 2025-12-01 22:47:16.420 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:18 compute-0 podman[246636]: 2025-12-01 22:47:18.845065993 +0000 UTC m=+0.104554487 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  1 22:47:18 compute-0 podman[246635]: 2025-12-01 22:47:18.896195334 +0000 UTC m=+0.176767746 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec  1 22:47:18 compute-0 nova_compute[189508]: 2025-12-01 22:47:18.962 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "3d3d4510-c787-4867-9d43-bb62dd22410f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:47:18 compute-0 nova_compute[189508]: 2025-12-01 22:47:18.963 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "3d3d4510-c787-4867-9d43-bb62dd22410f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:47:18 compute-0 nova_compute[189508]: 2025-12-01 22:47:18.990 189512 DEBUG nova.compute.manager [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.141 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.143 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.159 189512 DEBUG nova.virt.hardware [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.161 189512 INFO nova.compute.claims [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.344 189512 DEBUG nova.compute.provider_tree [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.360 189512 DEBUG nova.scheduler.client.report [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.383 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.385 189512 DEBUG nova.compute.manager [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.434 189512 DEBUG nova.compute.manager [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.453 189512 INFO nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.495 189512 DEBUG nova.compute.manager [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.586 189512 DEBUG nova.compute.manager [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.588 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.588 189512 INFO nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Creating image(s)#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.589 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "/var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.589 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.590 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.590 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:47:19 compute-0 nova_compute[189508]: 2025-12-01 22:47:19.590 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:47:20 compute-0 nova_compute[189508]: 2025-12-01 22:47:20.119 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.022 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.099 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781.part --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.101 189512 DEBUG nova.virt.images [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] e6ecd5c0-c4a6-45e6-8976-24c6f0744fe7 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.103 189512 DEBUG nova.privsep.utils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.104 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781.part /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.338 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781.part /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781.converted" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.347 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.422 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.443 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781.converted --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.445 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.471 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.569 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.571 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.573 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.597 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.691 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.694 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781,backing_fmt=raw /var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.752 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781,backing_fmt=raw /var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk 1073741824" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.754 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.755 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.856 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.859 189512 DEBUG nova.virt.disk.api [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Checking if we can resize image /var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.861 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.966 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.968 189512 DEBUG nova.virt.disk.api [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Cannot resize image /var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.969 189512 DEBUG nova.objects.instance [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lazy-loading 'migration_context' on Instance uuid 3d3d4510-c787-4867-9d43-bb62dd22410f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.988 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "/var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.990 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:47:21 compute-0 nova_compute[189508]: 2025-12-01 22:47:21.992 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "/var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.021 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.110 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.112 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.113 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.132 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.221 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.223 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.271 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk.eph0 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.274 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.275 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.343 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.346 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.348 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Ensure instance console log exists: /var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.350 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.351 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.352 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.358 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T22:47:06Z,direct_url=<?>,disk_format='qcow2',id=e6ecd5c0-c4a6-45e6-8976-24c6f0744fe7,min_disk=0,min_ram=0,name='fvt_testing_image',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T22:47:11Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': 'e6ecd5c0-c4a6-45e6-8976-24c6f0744fe7'}], 'ephemerals': [{'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'size': 1, 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.369 189512 WARNING nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.378 189512 DEBUG nova.virt.libvirt.host [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.378 189512 DEBUG nova.virt.libvirt.host [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.384 189512 DEBUG nova.virt.libvirt.host [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.384 189512 DEBUG nova.virt.libvirt.host [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.384 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.385 189512 DEBUG nova.virt.hardware [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T22:47:14Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='ea58e288-9a46-4884-9c9b-65a3f1e5bc49',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T22:47:06Z,direct_url=<?>,disk_format='qcow2',id=e6ecd5c0-c4a6-45e6-8976-24c6f0744fe7,min_disk=0,min_ram=0,name='fvt_testing_image',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T22:47:11Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.385 189512 DEBUG nova.virt.hardware [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.386 189512 DEBUG nova.virt.hardware [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.386 189512 DEBUG nova.virt.hardware [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.386 189512 DEBUG nova.virt.hardware [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.386 189512 DEBUG nova.virt.hardware [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.387 189512 DEBUG nova.virt.hardware [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.387 189512 DEBUG nova.virt.hardware [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.387 189512 DEBUG nova.virt.hardware [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.388 189512 DEBUG nova.virt.hardware [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.388 189512 DEBUG nova.virt.hardware [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.392 189512 DEBUG nova.objects.instance [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3d3d4510-c787-4867-9d43-bb62dd22410f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.423 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] End _get_guest_xml xml=<domain type="kvm">
Dec  1 22:47:22 compute-0 nova_compute[189508]:  <uuid>3d3d4510-c787-4867-9d43-bb62dd22410f</uuid>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  <name>instance-00000005</name>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  <memory>524288</memory>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  <vcpu>1</vcpu>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  <metadata>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <nova:name>fvt_testing_server</nova:name>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <nova:creationTime>2025-12-01 22:47:22</nova:creationTime>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <nova:flavor name="fvt_testing_flavor">
Dec  1 22:47:22 compute-0 nova_compute[189508]:        <nova:memory>512</nova:memory>
Dec  1 22:47:22 compute-0 nova_compute[189508]:        <nova:disk>1</nova:disk>
Dec  1 22:47:22 compute-0 nova_compute[189508]:        <nova:swap>0</nova:swap>
Dec  1 22:47:22 compute-0 nova_compute[189508]:        <nova:ephemeral>1</nova:ephemeral>
Dec  1 22:47:22 compute-0 nova_compute[189508]:        <nova:vcpus>1</nova:vcpus>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      </nova:flavor>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <nova:owner>
Dec  1 22:47:22 compute-0 nova_compute[189508]:        <nova:user uuid="3b810e864d6c4d058e539f62ad181096">admin</nova:user>
Dec  1 22:47:22 compute-0 nova_compute[189508]:        <nova:project uuid="af2fbf0e1b5f40c19aed69d241db7727">admin</nova:project>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      </nova:owner>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <nova:root type="image" uuid="e6ecd5c0-c4a6-45e6-8976-24c6f0744fe7"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <nova:ports/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    </nova:instance>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  </metadata>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  <sysinfo type="smbios">
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <system>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <entry name="manufacturer">RDO</entry>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <entry name="product">OpenStack Compute</entry>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <entry name="serial">3d3d4510-c787-4867-9d43-bb62dd22410f</entry>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <entry name="uuid">3d3d4510-c787-4867-9d43-bb62dd22410f</entry>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <entry name="family">Virtual Machine</entry>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    </system>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  </sysinfo>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  <os>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <boot dev="hd"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <smbios mode="sysinfo"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  </os>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  <features>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <acpi/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <apic/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <vmcoreinfo/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  </features>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  <clock offset="utc">
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <timer name="hpet" present="no"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  </clock>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  <cpu mode="host-model" match="exact">
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  </cpu>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  <devices>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <target dev="vda" bus="virtio"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk.eph0"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <target dev="vdb" bus="virtio"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <disk type="file" device="cdrom">
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk.config"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <target dev="sda" bus="sata"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <serial type="pty">
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <log file="/var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/console.log" append="off"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    </serial>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <video>
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    </video>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <input type="tablet" bus="usb"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <rng model="virtio">
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <backend model="random">/dev/urandom</backend>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    </rng>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <controller type="usb" index="0"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    <memballoon model="virtio">
Dec  1 22:47:22 compute-0 nova_compute[189508]:      <stats period="10"/>
Dec  1 22:47:22 compute-0 nova_compute[189508]:    </memballoon>
Dec  1 22:47:22 compute-0 nova_compute[189508]:  </devices>
Dec  1 22:47:22 compute-0 nova_compute[189508]: </domain>
Dec  1 22:47:22 compute-0 nova_compute[189508]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.511 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.512 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.512 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:47:22 compute-0 nova_compute[189508]: 2025-12-01 22:47:22.513 189512 INFO nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Using config drive#033[00m
Dec  1 22:47:23 compute-0 nova_compute[189508]: 2025-12-01 22:47:23.502 189512 INFO nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Creating config drive at /var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk.config#033[00m
Dec  1 22:47:23 compute-0 nova_compute[189508]: 2025-12-01 22:47:23.510 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp22ag5tpw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:23 compute-0 nova_compute[189508]: 2025-12-01 22:47:23.646 189512 DEBUG oslo_concurrency.processutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp22ag5tpw" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:23 compute-0 systemd-machined[155759]: New machine qemu-5-instance-00000005.
Dec  1 22:47:23 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Dec  1 22:47:23 compute-0 podman[246729]: 2025-12-01 22:47:23.853193704 +0000 UTC m=+0.110794214 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 22:47:23 compute-0 podman[246731]: 2025-12-01 22:47:23.890643567 +0000 UTC m=+0.132599743 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:47:23 compute-0 podman[246733]: 2025-12-01 22:47:23.893463597 +0000 UTC m=+0.131553924 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, release=1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, container_name=kepler, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 22:47:23 compute-0 podman[246732]: 2025-12-01 22:47:23.898242482 +0000 UTC m=+0.148030550 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal)
Dec  1 22:47:24 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  1 22:47:24 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.295 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629244.29417, 3d3d4510-c787-4867-9d43-bb62dd22410f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.300 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] VM Resumed (Lifecycle Event)#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.305 189512 DEBUG nova.compute.manager [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.306 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.318 189512 INFO nova.virt.libvirt.driver [-] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Instance spawned successfully.#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.320 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.338 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.355 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.370 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.370 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.371 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.371 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.372 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.372 189512 DEBUG nova.virt.libvirt.driver [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.418 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.419 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629244.2981627, 3d3d4510-c787-4867-9d43-bb62dd22410f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.419 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] VM Started (Lifecycle Event)#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.460 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.469 189512 INFO nova.compute.manager [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Took 4.88 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.470 189512 DEBUG nova.compute.manager [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.477 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.517 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.564 189512 INFO nova.compute.manager [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Took 5.49 seconds to build instance.#033[00m
Dec  1 22:47:24 compute-0 nova_compute[189508]: 2025-12-01 22:47:24.601 189512 DEBUG oslo_concurrency.lockutils [None req-05af9d27-547b-4e83-bc42-54fe1c822135 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "3d3d4510-c787-4867-9d43-bb62dd22410f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:47:25 compute-0 nova_compute[189508]: 2025-12-01 22:47:25.122 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:26 compute-0 nova_compute[189508]: 2025-12-01 22:47:26.427 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:29 compute-0 podman[203693]: time="2025-12-01T22:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:47:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:47:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec  1 22:47:30 compute-0 nova_compute[189508]: 2025-12-01 22:47:30.126 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:31 compute-0 openstack_network_exporter[205887]: ERROR   22:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:47:31 compute-0 openstack_network_exporter[205887]: ERROR   22:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:47:31 compute-0 openstack_network_exporter[205887]: ERROR   22:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:47:31 compute-0 nova_compute[189508]: 2025-12-01 22:47:31.431 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:31 compute-0 openstack_network_exporter[205887]: ERROR   22:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:47:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:47:31 compute-0 openstack_network_exporter[205887]: ERROR   22:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:47:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:47:35 compute-0 nova_compute[189508]: 2025-12-01 22:47:35.129 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.270 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.271 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.271 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.279 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'db72b066-1974-41bb-a917-13b5ba129196', 'name': 'test_0', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.283 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dae82663-6de4-4397-8aab-9559ddeaec24', 'name': 'vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {'metering.server_group': '40d7879f-33f5-4fcb-8784-d9088730e18f'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.286 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 3d3d4510-c787-4867-9d43-bb62dd22410f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 22:47:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:35.287 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/3d3d4510-c787-4867-9d43-bb62dd22410f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82f68aee2d35afc7725a847ea4300457258faf9d3b47fbdf3a1dc69f53294b24" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 22:47:36 compute-0 nova_compute[189508]: 2025-12-01 22:47:36.435 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.655 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1572 Content-Type: application/json Date: Mon, 01 Dec 2025 22:47:35 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-2b33227e-e7fd-4cf9-86ba-e6e5f0fed662 x-openstack-request-id: req-2b33227e-e7fd-4cf9-86ba-e6e5f0fed662 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.655 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "3d3d4510-c787-4867-9d43-bb62dd22410f", "name": "fvt_testing_server", "status": "ACTIVE", "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "user_id": "3b810e864d6c4d058e539f62ad181096", "metadata": {}, "hostId": "968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d", "image": {"id": "e6ecd5c0-c4a6-45e6-8976-24c6f0744fe7", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e6ecd5c0-c4a6-45e6-8976-24c6f0744fe7"}]}, "flavor": {"id": "ea58e288-9a46-4884-9c9b-65a3f1e5bc49", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/ea58e288-9a46-4884-9c9b-65a3f1e5bc49"}]}, "created": "2025-12-01T22:47:18Z", "updated": "2025-12-01T22:47:24Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/3d3d4510-c787-4867-9d43-bb62dd22410f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/3d3d4510-c787-4867-9d43-bb62dd22410f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T22:47:24.000000", "OS-SRV-USG:terminated_at": null, "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.655 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/3d3d4510-c787-4867-9d43-bb62dd22410f used request id req-2b33227e-e7fd-4cf9-86ba-e6e5f0fed662 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.656 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3d3d4510-c787-4867-9d43-bb62dd22410f', 'name': 'fvt_testing_server', 'flavor': {'id': 'ea58e288-9a46-4884-9c9b-65a3f1e5bc49', 'name': 'fvt_testing_flavor', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e6ecd5c0-c4a6-45e6-8976-24c6f0744fe7'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.657 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.657 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.657 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.657 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.658 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T22:47:36.657340) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.663 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.667 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.670 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.670 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.670 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.671 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.671 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.671 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.671 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.671 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.671 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.672 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.672 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.672 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.672 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.672 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.672 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.672 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.673 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.673 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.673 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.673 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.673 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T22:47:36.671125) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T22:47:36.672318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.674 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T22:47:36.673631) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.712 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.713 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.713 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.750 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.751 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.752 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.786 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.787 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.787 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.788 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.789 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.789 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.789 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.790 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.790 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T22:47:36.790032) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.901 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.902 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.902 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.995 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.996 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:36.996 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.064 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.064 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.064 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.065 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.065 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.065 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.065 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.066 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 484161753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.066 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 126486600 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T22:47:37.065915) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.066 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 84264950 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.066 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.latency volume: 529113669 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.067 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.latency volume: 125664984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.067 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.latency volume: 99600138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.067 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.read.latency volume: 345228949 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.067 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.067 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.read.latency volume: 1364279 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.068 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.068 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.068 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.068 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.069 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.069 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T22:47:37.068688) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.069 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.069 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.069 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.070 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.070 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.070 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.070 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.071 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.071 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.071 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.071 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.071 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.071 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.072 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.072 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.072 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T22:47:37.071155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.072 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.072 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.072 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.073 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.073 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.073 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.074 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T22:47:37.073773) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.074 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.074 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.074 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.074 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.075 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.075 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.075 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.076 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.076 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.076 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.076 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T22:47:37.076224) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.076 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.076 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.077 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.077 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.077 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.077 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.077 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.077 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.078 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.078 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.078 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.078 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.078 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 2925316221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.079 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 17009348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T22:47:37.078733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.079 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.079 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.latency volume: 1954219616 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.079 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.latency volume: 13544625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.079 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.080 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.080 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.080 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.081 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.081 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.081 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.081 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T22:47:37.081413) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.105 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/cpu volume: 45390000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.127 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/cpu volume: 38740000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.168 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/cpu volume: 12100000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.170 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.170 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.170 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.170 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.170 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.171 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.171 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T22:47:37.171105) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.171 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.172 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.172 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.173 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.173 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.174 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.174 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.174 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.174 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.174 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.174 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T22:47:37.174475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.175 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.175 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.175 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.175 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.176 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.176 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.176 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.176 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.176 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T22:47:37.176079) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.177 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.177 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.177 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.177 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.178 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.178 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.178 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.179 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.179 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.179 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.180 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.180 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T22:47:37.179823) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.180 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.181 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.182 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.182 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.182 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.183 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.183 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.183 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.184 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T22:47:37.182195) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.184 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.184 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.184 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T22:47:37.184428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.185 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.185 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.186 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.186 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.186 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.186 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.187 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.187 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.187 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T22:47:37.186503) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.187 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.188 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.188 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.188 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.189 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T22:47:37.188250) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.190 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.190 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.190 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.190 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.190 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.190 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T22:47:37.190362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.190 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.191 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.191 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.191 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T22:47:37.191891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.192 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.192 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.193 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.193 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.193 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.193 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.193 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.193 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.194 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T22:47:37.193696) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.194 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.195 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.195 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 nova_compute[189508]: 2025-12-01 22:47:37.194 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.195 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.195 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/memory.usage volume: 48.75390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.195 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.196 14 DEBUG ceilometer.compute.pollsters [-] 3d3d4510-c787-4867-9d43-bb62dd22410f/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.196 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 3d3d4510-c787-4867-9d43-bb62dd22410f: ceilometer.compute.pollsters.NoVolumeException
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.196 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.196 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.197 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.197 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T22:47:37.195275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.197 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.197 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T22:47:37.197548) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.198 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.198 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.198 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.198 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.198 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.198 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.199 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T22:47:37.198768) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.199 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:47:37.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:47:40 compute-0 nova_compute[189508]: 2025-12-01 22:47:40.131 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:47:40 compute-0 podman[246849]: 2025-12-01 22:47:40.860899317 +0000 UTC m=+0.127623162 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 22:47:41 compute-0 nova_compute[189508]: 2025-12-01 22:47:41.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 22:47:41 compute-0 nova_compute[189508]: 2025-12-01 22:47:41.438 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:47:42 compute-0 nova_compute[189508]: 2025-12-01 22:47:42.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 22:47:42 compute-0 podman[246873]: 2025-12-01 22:47:42.861145198 +0000 UTC m=+0.127714385 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 22:47:43 compute-0 nova_compute[189508]: 2025-12-01 22:47:43.225 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 22:47:43 compute-0 nova_compute[189508]: 2025-12-01 22:47:43.666 189512 DEBUG oslo_concurrency.lockutils [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "3d3d4510-c787-4867-9d43-bb62dd22410f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:47:43 compute-0 nova_compute[189508]: 2025-12-01 22:47:43.667 189512 DEBUG oslo_concurrency.lockutils [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "3d3d4510-c787-4867-9d43-bb62dd22410f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:47:43 compute-0 nova_compute[189508]: 2025-12-01 22:47:43.668 189512 DEBUG oslo_concurrency.lockutils [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "3d3d4510-c787-4867-9d43-bb62dd22410f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:47:43 compute-0 nova_compute[189508]: 2025-12-01 22:47:43.668 189512 DEBUG oslo_concurrency.lockutils [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "3d3d4510-c787-4867-9d43-bb62dd22410f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:47:43 compute-0 nova_compute[189508]: 2025-12-01 22:47:43.669 189512 DEBUG oslo_concurrency.lockutils [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "3d3d4510-c787-4867-9d43-bb62dd22410f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:47:43 compute-0 nova_compute[189508]: 2025-12-01 22:47:43.670 189512 INFO nova.compute.manager [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Terminating instance
Dec  1 22:47:43 compute-0 nova_compute[189508]: 2025-12-01 22:47:43.672 189512 DEBUG oslo_concurrency.lockutils [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "refresh_cache-3d3d4510-c787-4867-9d43-bb62dd22410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 22:47:43 compute-0 nova_compute[189508]: 2025-12-01 22:47:43.672 189512 DEBUG oslo_concurrency.lockutils [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquired lock "refresh_cache-3d3d4510-c787-4867-9d43-bb62dd22410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 22:47:43 compute-0 nova_compute[189508]: 2025-12-01 22:47:43.673 189512 DEBUG nova.network.neutron [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  1 22:47:43 compute-0 nova_compute[189508]: 2025-12-01 22:47:43.800 189512 DEBUG nova.network.neutron [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  1 22:47:44 compute-0 nova_compute[189508]: 2025-12-01 22:47:44.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 22:47:44 compute-0 nova_compute[189508]: 2025-12-01 22:47:44.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 22:47:44 compute-0 nova_compute[189508]: 2025-12-01 22:47:44.598 189512 DEBUG nova.network.neutron [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 22:47:44 compute-0 nova_compute[189508]: 2025-12-01 22:47:44.624 189512 DEBUG oslo_concurrency.lockutils [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Releasing lock "refresh_cache-3d3d4510-c787-4867-9d43-bb62dd22410f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 22:47:44 compute-0 nova_compute[189508]: 2025-12-01 22:47:44.626 189512 DEBUG nova.compute.manager [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 22:47:44 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec  1 22:47:44 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 20.983s CPU time.
Dec  1 22:47:44 compute-0 systemd-machined[155759]: Machine qemu-5-instance-00000005 terminated.
Dec  1 22:47:44 compute-0 nova_compute[189508]: 2025-12-01 22:47:44.954 189512 INFO nova.virt.libvirt.driver [-] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Instance destroyed successfully.#033[00m
Dec  1 22:47:44 compute-0 nova_compute[189508]: 2025-12-01 22:47:44.955 189512 DEBUG nova.objects.instance [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lazy-loading 'resources' on Instance uuid 3d3d4510-c787-4867-9d43-bb62dd22410f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:47:44 compute-0 nova_compute[189508]: 2025-12-01 22:47:44.970 189512 INFO nova.virt.libvirt.driver [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Deleting instance files /var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f_del#033[00m
Dec  1 22:47:44 compute-0 nova_compute[189508]: 2025-12-01 22:47:44.972 189512 INFO nova.virt.libvirt.driver [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Deletion of /var/lib/nova/instances/3d3d4510-c787-4867-9d43-bb62dd22410f_del complete#033[00m
Dec  1 22:47:45 compute-0 nova_compute[189508]: 2025-12-01 22:47:45.041 189512 INFO nova.compute.manager [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Took 0.41 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 22:47:45 compute-0 nova_compute[189508]: 2025-12-01 22:47:45.042 189512 DEBUG oslo.service.loopingcall [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 22:47:45 compute-0 nova_compute[189508]: 2025-12-01 22:47:45.043 189512 DEBUG nova.compute.manager [-] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 22:47:45 compute-0 nova_compute[189508]: 2025-12-01 22:47:45.043 189512 DEBUG nova.network.neutron [-] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 22:47:45 compute-0 nova_compute[189508]: 2025-12-01 22:47:45.135 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:45 compute-0 nova_compute[189508]: 2025-12-01 22:47:45.518 189512 DEBUG nova.network.neutron [-] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 22:47:45 compute-0 nova_compute[189508]: 2025-12-01 22:47:45.540 189512 DEBUG nova.network.neutron [-] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:47:45 compute-0 nova_compute[189508]: 2025-12-01 22:47:45.557 189512 INFO nova.compute.manager [-] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Took 0.51 seconds to deallocate network for instance.#033[00m
Dec  1 22:47:45 compute-0 nova_compute[189508]: 2025-12-01 22:47:45.613 189512 DEBUG oslo_concurrency.lockutils [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:47:45 compute-0 nova_compute[189508]: 2025-12-01 22:47:45.614 189512 DEBUG oslo_concurrency.lockutils [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:47:45 compute-0 nova_compute[189508]: 2025-12-01 22:47:45.800 189512 DEBUG nova.scheduler.client.report [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Refreshing inventories for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 22:47:45 compute-0 podman[246906]: 2025-12-01 22:47:45.873978089 +0000 UTC m=+0.122958079 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:47:45 compute-0 nova_compute[189508]: 2025-12-01 22:47:45.937 189512 DEBUG nova.scheduler.client.report [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Updating ProviderTree inventory for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 22:47:45 compute-0 nova_compute[189508]: 2025-12-01 22:47:45.938 189512 DEBUG nova.compute.provider_tree [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Updating inventory in ProviderTree for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 22:47:46 compute-0 nova_compute[189508]: 2025-12-01 22:47:46.021 189512 DEBUG nova.scheduler.client.report [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Refreshing aggregate associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 22:47:46 compute-0 nova_compute[189508]: 2025-12-01 22:47:46.063 189512 DEBUG nova.scheduler.client.report [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Refreshing trait associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 22:47:46 compute-0 nova_compute[189508]: 2025-12-01 22:47:46.178 189512 DEBUG nova.compute.provider_tree [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:47:46 compute-0 nova_compute[189508]: 2025-12-01 22:47:46.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:47:46 compute-0 nova_compute[189508]: 2025-12-01 22:47:46.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:47:46 compute-0 nova_compute[189508]: 2025-12-01 22:47:46.203 189512 DEBUG nova.scheduler.client.report [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:47:46 compute-0 nova_compute[189508]: 2025-12-01 22:47:46.257 189512 DEBUG oslo_concurrency.lockutils [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:47:46 compute-0 nova_compute[189508]: 2025-12-01 22:47:46.291 189512 INFO nova.scheduler.client.report [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Deleted allocations for instance 3d3d4510-c787-4867-9d43-bb62dd22410f#033[00m
Dec  1 22:47:46 compute-0 nova_compute[189508]: 2025-12-01 22:47:46.390 189512 DEBUG oslo_concurrency.lockutils [None req-3e50507c-2cd7-489c-89a8-c9fcf0838f50 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "3d3d4510-c787-4867-9d43-bb62dd22410f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:47:46 compute-0 nova_compute[189508]: 2025-12-01 22:47:46.442 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:46 compute-0 nova_compute[189508]: 2025-12-01 22:47:46.632 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:47:46 compute-0 nova_compute[189508]: 2025-12-01 22:47:46.633 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:47:46 compute-0 nova_compute[189508]: 2025-12-01 22:47:46.635 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:47:47 compute-0 nova_compute[189508]: 2025-12-01 22:47:47.902 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Updating instance_info_cache with network_info: [{"id": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "address": "fa:16:3e:a3:f6:49", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4f1e6ff-94", "ovs_interfaceid": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:47:47 compute-0 nova_compute[189508]: 2025-12-01 22:47:47.923 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:47:47 compute-0 nova_compute[189508]: 2025-12-01 22:47:47.924 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:47:47 compute-0 nova_compute[189508]: 2025-12-01 22:47:47.925 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:47:47 compute-0 nova_compute[189508]: 2025-12-01 22:47:47.926 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:47:47 compute-0 nova_compute[189508]: 2025-12-01 22:47:47.955 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:47:47 compute-0 nova_compute[189508]: 2025-12-01 22:47:47.956 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:47:47 compute-0 nova_compute[189508]: 2025-12-01 22:47:47.956 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:47:47 compute-0 nova_compute[189508]: 2025-12-01 22:47:47.956 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:47:48 compute-0 nova_compute[189508]: 2025-12-01 22:47:48.086 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:48 compute-0 nova_compute[189508]: 2025-12-01 22:47:48.191 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:48 compute-0 nova_compute[189508]: 2025-12-01 22:47:48.194 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:48 compute-0 nova_compute[189508]: 2025-12-01 22:47:48.286 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:48 compute-0 nova_compute[189508]: 2025-12-01 22:47:48.288 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:48 compute-0 nova_compute[189508]: 2025-12-01 22:47:48.358 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:48 compute-0 nova_compute[189508]: 2025-12-01 22:47:48.359 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:48 compute-0 nova_compute[189508]: 2025-12-01 22:47:48.428 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:48 compute-0 nova_compute[189508]: 2025-12-01 22:47:48.444 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:48 compute-0 nova_compute[189508]: 2025-12-01 22:47:48.523 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:48 compute-0 nova_compute[189508]: 2025-12-01 22:47:48.525 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:48 compute-0 nova_compute[189508]: 2025-12-01 22:47:48.628 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:48 compute-0 nova_compute[189508]: 2025-12-01 22:47:48.630 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:48 compute-0 nova_compute[189508]: 2025-12-01 22:47:48.730 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:48 compute-0 nova_compute[189508]: 2025-12-01 22:47:48.732 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:47:48 compute-0 nova_compute[189508]: 2025-12-01 22:47:48.802 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:47:49 compute-0 nova_compute[189508]: 2025-12-01 22:47:49.363 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:47:49 compute-0 nova_compute[189508]: 2025-12-01 22:47:49.364 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4870MB free_disk=72.1509017944336GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:47:49 compute-0 nova_compute[189508]: 2025-12-01 22:47:49.364 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:47:49 compute-0 nova_compute[189508]: 2025-12-01 22:47:49.365 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:47:49 compute-0 nova_compute[189508]: 2025-12-01 22:47:49.476 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:47:49 compute-0 nova_compute[189508]: 2025-12-01 22:47:49.477 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance dae82663-6de4-4397-8aab-9559ddeaec24 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:47:49 compute-0 nova_compute[189508]: 2025-12-01 22:47:49.477 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:47:49 compute-0 nova_compute[189508]: 2025-12-01 22:47:49.477 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:47:49 compute-0 nova_compute[189508]: 2025-12-01 22:47:49.547 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:47:49 compute-0 nova_compute[189508]: 2025-12-01 22:47:49.561 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:47:49 compute-0 nova_compute[189508]: 2025-12-01 22:47:49.584 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:47:49 compute-0 nova_compute[189508]: 2025-12-01 22:47:49.585 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:47:49 compute-0 nova_compute[189508]: 2025-12-01 22:47:49.585 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:47:49 compute-0 nova_compute[189508]: 2025-12-01 22:47:49.585 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 22:47:49 compute-0 nova_compute[189508]: 2025-12-01 22:47:49.597 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 22:47:49 compute-0 podman[246953]: 2025-12-01 22:47:49.909082607 +0000 UTC m=+0.167631494 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  1 22:47:49 compute-0 podman[246952]: 2025-12-01 22:47:49.943028558 +0000 UTC m=+0.205212367 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 22:47:50 compute-0 nova_compute[189508]: 2025-12-01 22:47:50.139 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:50 compute-0 nova_compute[189508]: 2025-12-01 22:47:50.870 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:47:50 compute-0 nova_compute[189508]: 2025-12-01 22:47:50.871 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:47:51 compute-0 nova_compute[189508]: 2025-12-01 22:47:51.445 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:52 compute-0 nova_compute[189508]: 2025-12-01 22:47:52.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:47:52 compute-0 nova_compute[189508]: 2025-12-01 22:47:52.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 22:47:54 compute-0 podman[246996]: 2025-12-01 22:47:54.851446907 +0000 UTC m=+0.121805278 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:47:54 compute-0 podman[246998]: 2025-12-01 22:47:54.853052822 +0000 UTC m=+0.113902344 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, vendor=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Dec  1 22:47:54 compute-0 podman[246997]: 2025-12-01 22:47:54.859777852 +0000 UTC m=+0.136252656 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 
Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  1 22:47:54 compute-0 podman[246999]: 2025-12-01 22:47:54.861904582 +0000 UTC m=+0.115119158 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, config_id=edpm, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 22:47:55 compute-0 nova_compute[189508]: 2025-12-01 22:47:55.143 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:56 compute-0 nova_compute[189508]: 2025-12-01 22:47:56.448 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:47:59 compute-0 podman[203693]: time="2025-12-01T22:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:47:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:47:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Dec  1 22:47:59 compute-0 nova_compute[189508]: 2025-12-01 22:47:59.949 189512 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764629264.9478927, 3d3d4510-c787-4867-9d43-bb62dd22410f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:47:59 compute-0 nova_compute[189508]: 2025-12-01 22:47:59.950 189512 INFO nova.compute.manager [-] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] VM Stopped (Lifecycle Event)#033[00m
Dec  1 22:47:59 compute-0 nova_compute[189508]: 2025-12-01 22:47:59.969 189512 DEBUG nova.compute.manager [None req-6ef4f4eb-3fe5-43b1-b408-1571b3cb9f83 - - - - - -] [instance: 3d3d4510-c787-4867-9d43-bb62dd22410f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:48:00 compute-0 nova_compute[189508]: 2025-12-01 22:48:00.146 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:01 compute-0 openstack_network_exporter[205887]: ERROR   22:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:48:01 compute-0 openstack_network_exporter[205887]: ERROR   22:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:48:01 compute-0 openstack_network_exporter[205887]: ERROR   22:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:48:01 compute-0 openstack_network_exporter[205887]: ERROR   22:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:48:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:48:01 compute-0 openstack_network_exporter[205887]: ERROR   22:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:48:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:48:01 compute-0 nova_compute[189508]: 2025-12-01 22:48:01.450 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:02 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Dec  1 22:48:02 compute-0 systemd[1]: session-29.scope: Consumed 1.414s CPU time.
Dec  1 22:48:02 compute-0 systemd-logind[788]: Session 29 logged out. Waiting for processes to exit.
Dec  1 22:48:02 compute-0 systemd-logind[788]: Removed session 29.
Dec  1 22:48:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:48:04.627 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:48:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:48:04.628 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:48:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:48:04.629 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:48:05 compute-0 nova_compute[189508]: 2025-12-01 22:48:05.149 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:06 compute-0 nova_compute[189508]: 2025-12-01 22:48:06.456 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:10 compute-0 nova_compute[189508]: 2025-12-01 22:48:10.152 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:11 compute-0 nova_compute[189508]: 2025-12-01 22:48:11.460 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:11 compute-0 podman[247074]: 2025-12-01 22:48:11.836277406 +0000 UTC m=+0.114610274 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:48:13 compute-0 podman[247098]: 2025-12-01 22:48:13.840787184 +0000 UTC m=+0.109120278 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:48:15 compute-0 nova_compute[189508]: 2025-12-01 22:48:15.156 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:16 compute-0 nova_compute[189508]: 2025-12-01 22:48:16.462 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:16 compute-0 podman[247119]: 2025-12-01 22:48:16.828816277 +0000 UTC m=+0.101181654 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 22:48:20 compute-0 nova_compute[189508]: 2025-12-01 22:48:20.158 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:20 compute-0 podman[247140]: 2025-12-01 22:48:20.856207409 +0000 UTC m=+0.115107378 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 22:48:20 compute-0 podman[247139]: 2025-12-01 22:48:20.936948473 +0000 UTC m=+0.205764593 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 22:48:21 compute-0 systemd-logind[788]: New session 30 of user zuul.
Dec  1 22:48:21 compute-0 systemd[1]: Started Session 30 of User zuul.
Dec  1 22:48:21 compute-0 nova_compute[189508]: 2025-12-01 22:48:21.465 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:22 compute-0 python3[247359]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 22:48:25 compute-0 nova_compute[189508]: 2025-12-01 22:48:25.162 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:25 compute-0 podman[247398]: 2025-12-01 22:48:25.864217415 +0000 UTC m=+0.115543641 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:48:25 compute-0 podman[247397]: 2025-12-01 22:48:25.877047687 +0000 UTC m=+0.140918428 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 22:48:25 compute-0 podman[247399]: 2025-12-01 22:48:25.878672554 +0000 UTC m=+0.120227743 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, version=9.6, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-type=git, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 22:48:25 compute-0 podman[247405]: 2025-12-01 22:48:25.888840661 +0000 UTC m=+0.117664020 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vendor=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git)
Dec  1 22:48:26 compute-0 nova_compute[189508]: 2025-12-01 22:48:26.469 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:29 compute-0 podman[203693]: time="2025-12-01T22:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:48:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:48:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
Dec  1 22:48:30 compute-0 nova_compute[189508]: 2025-12-01 22:48:30.165 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:30 compute-0 python3[247650]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 22:48:31 compute-0 openstack_network_exporter[205887]: ERROR   22:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:48:31 compute-0 openstack_network_exporter[205887]: ERROR   22:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:48:31 compute-0 openstack_network_exporter[205887]: ERROR   22:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:48:31 compute-0 openstack_network_exporter[205887]: ERROR   22:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:48:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:48:31 compute-0 openstack_network_exporter[205887]: ERROR   22:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:48:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:48:31 compute-0 nova_compute[189508]: 2025-12-01 22:48:31.473 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:35 compute-0 nova_compute[189508]: 2025-12-01 22:48:35.168 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:36 compute-0 nova_compute[189508]: 2025-12-01 22:48:36.476 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:39 compute-0 nova_compute[189508]: 2025-12-01 22:48:39.219 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:48:39 compute-0 nova_compute[189508]: 2025-12-01 22:48:39.221 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:48:40 compute-0 nova_compute[189508]: 2025-12-01 22:48:40.174 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:41 compute-0 nova_compute[189508]: 2025-12-01 22:48:41.480 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:41 compute-0 python3[247865]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 22:48:42 compute-0 nova_compute[189508]: 2025-12-01 22:48:42.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:48:42 compute-0 podman[247902]: 2025-12-01 22:48:42.850659568 +0000 UTC m=+0.114995335 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:48:43 compute-0 nova_compute[189508]: 2025-12-01 22:48:43.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:48:44 compute-0 podman[247927]: 2025-12-01 22:48:44.852375504 +0000 UTC m=+0.123625919 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 22:48:45 compute-0 nova_compute[189508]: 2025-12-01 22:48:45.177 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:45 compute-0 nova_compute[189508]: 2025-12-01 22:48:45.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:48:45 compute-0 nova_compute[189508]: 2025-12-01 22:48:45.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:48:46 compute-0 nova_compute[189508]: 2025-12-01 22:48:46.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:48:46 compute-0 nova_compute[189508]: 2025-12-01 22:48:46.482 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:47 compute-0 podman[247947]: 2025-12-01 22:48:47.899450698 +0000 UTC m=+0.170021251 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec  1 22:48:48 compute-0 nova_compute[189508]: 2025-12-01 22:48:48.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:48:48 compute-0 nova_compute[189508]: 2025-12-01 22:48:48.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:48:48 compute-0 nova_compute[189508]: 2025-12-01 22:48:48.202 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:48:48 compute-0 nova_compute[189508]: 2025-12-01 22:48:48.553 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:48:48 compute-0 nova_compute[189508]: 2025-12-01 22:48:48.554 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:48:48 compute-0 nova_compute[189508]: 2025-12-01 22:48:48.555 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:48:48 compute-0 nova_compute[189508]: 2025-12-01 22:48:48.555 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid db72b066-1974-41bb-a917-13b5ba129196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:48:50 compute-0 nova_compute[189508]: 2025-12-01 22:48:50.179 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:50 compute-0 nova_compute[189508]: 2025-12-01 22:48:50.989 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updating instance_info_cache with network_info: [{"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.019 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.020 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.022 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.023 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.055 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.056 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.056 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.056 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:48:51 compute-0 rsyslogd[236992]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.180 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.273 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.275 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.372 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.374 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.443 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.446 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.486 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.510 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.518 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.583 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.584 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.659 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.660 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.733 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.735 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:48:51 compute-0 podman[247988]: 2025-12-01 22:48:51.813047379 +0000 UTC m=+0.084840302 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 22:48:51 compute-0 nova_compute[189508]: 2025-12-01 22:48:51.826 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:48:51 compute-0 podman[247987]: 2025-12-01 22:48:51.85905682 +0000 UTC m=+0.129470244 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 22:48:52 compute-0 nova_compute[189508]: 2025-12-01 22:48:52.360 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:48:52 compute-0 nova_compute[189508]: 2025-12-01 22:48:52.364 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4878MB free_disk=72.1509017944336GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:48:52 compute-0 nova_compute[189508]: 2025-12-01 22:48:52.364 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:48:52 compute-0 nova_compute[189508]: 2025-12-01 22:48:52.365 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:48:52 compute-0 nova_compute[189508]: 2025-12-01 22:48:52.477 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:48:52 compute-0 nova_compute[189508]: 2025-12-01 22:48:52.478 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance dae82663-6de4-4397-8aab-9559ddeaec24 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:48:52 compute-0 nova_compute[189508]: 2025-12-01 22:48:52.479 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:48:52 compute-0 nova_compute[189508]: 2025-12-01 22:48:52.479 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:48:52 compute-0 nova_compute[189508]: 2025-12-01 22:48:52.613 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:48:52 compute-0 nova_compute[189508]: 2025-12-01 22:48:52.636 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:48:52 compute-0 nova_compute[189508]: 2025-12-01 22:48:52.640 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:48:52 compute-0 nova_compute[189508]: 2025-12-01 22:48:52.641 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:48:52 compute-0 nova_compute[189508]: 2025-12-01 22:48:52.817 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:48:55 compute-0 nova_compute[189508]: 2025-12-01 22:48:55.182 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:56 compute-0 nova_compute[189508]: 2025-12-01 22:48:56.489 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:48:56 compute-0 podman[248132]: 2025-12-01 22:48:56.821099137 +0000 UTC m=+0.096230604 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 22:48:56 compute-0 podman[248138]: 2025-12-01 22:48:56.825173232 +0000 UTC m=+0.091250252 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, managed_by=edpm_ansible, config_id=edpm, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, version=9.4)
Dec  1 22:48:56 compute-0 podman[248137]: 2025-12-01 22:48:56.847913926 +0000 UTC m=+0.107448071 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, release=1755695350, distribution-scope=public, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Dec  1 22:48:56 compute-0 podman[248135]: 2025-12-01 22:48:56.871637757 +0000 UTC m=+0.136677628 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:48:57 compute-0 python3[248290]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 22:48:59 compute-0 podman[203693]: time="2025-12-01T22:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:48:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:48:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
Dec  1 22:49:00 compute-0 nova_compute[189508]: 2025-12-01 22:49:00.185 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:01 compute-0 openstack_network_exporter[205887]: ERROR   22:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:49:01 compute-0 openstack_network_exporter[205887]: ERROR   22:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:49:01 compute-0 openstack_network_exporter[205887]: ERROR   22:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:49:01 compute-0 openstack_network_exporter[205887]: ERROR   22:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:49:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:49:01 compute-0 openstack_network_exporter[205887]: ERROR   22:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:49:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:49:01 compute-0 nova_compute[189508]: 2025-12-01 22:49:01.492 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:49:04.628 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:49:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:49:04.629 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:49:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:49:04.631 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:49:05 compute-0 nova_compute[189508]: 2025-12-01 22:49:05.189 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:06 compute-0 nova_compute[189508]: 2025-12-01 22:49:06.496 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:10 compute-0 nova_compute[189508]: 2025-12-01 22:49:10.193 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:11 compute-0 nova_compute[189508]: 2025-12-01 22:49:11.499 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:13 compute-0 podman[248329]: 2025-12-01 22:49:13.835267559 +0000 UTC m=+0.100220977 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 22:49:15 compute-0 nova_compute[189508]: 2025-12-01 22:49:15.195 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:15 compute-0 podman[248352]: 2025-12-01 22:49:15.893109463 +0000 UTC m=+0.157589290 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:49:16 compute-0 nova_compute[189508]: 2025-12-01 22:49:16.503 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:18 compute-0 podman[248373]: 2025-12-01 22:49:18.853514501 +0000 UTC m=+0.121822006 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 22:49:20 compute-0 nova_compute[189508]: 2025-12-01 22:49:20.200 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:21 compute-0 nova_compute[189508]: 2025-12-01 22:49:21.507 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:22 compute-0 podman[248394]: 2025-12-01 22:49:22.842532426 +0000 UTC m=+0.117133355 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec  1 22:49:22 compute-0 podman[248393]: 2025-12-01 22:49:22.886178311 +0000 UTC m=+0.159102313 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 22:49:24 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 22:49:25 compute-0 nova_compute[189508]: 2025-12-01 22:49:25.203 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:26 compute-0 nova_compute[189508]: 2025-12-01 22:49:26.512 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:27 compute-0 podman[248437]: 2025-12-01 22:49:27.842472519 +0000 UTC m=+0.103016376 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 22:49:27 compute-0 podman[248438]: 2025-12-01 22:49:27.865732457 +0000 UTC m=+0.114006076 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  1 22:49:27 compute-0 podman[248440]: 2025-12-01 22:49:27.863465173 +0000 UTC m=+0.103912101 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, release-0.7.12=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30)
Dec  1 22:49:27 compute-0 podman[248439]: 2025-12-01 22:49:27.886098423 +0000 UTC m=+0.135931627 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.6, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, name=ubi9-minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Dec  1 22:49:29 compute-0 podman[203693]: time="2025-12-01T22:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:49:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:49:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
Dec  1 22:49:30 compute-0 nova_compute[189508]: 2025-12-01 22:49:30.207 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:31 compute-0 openstack_network_exporter[205887]: ERROR   22:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:49:31 compute-0 openstack_network_exporter[205887]: ERROR   22:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:49:31 compute-0 openstack_network_exporter[205887]: ERROR   22:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:49:31 compute-0 openstack_network_exporter[205887]: ERROR   22:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:49:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:49:31 compute-0 openstack_network_exporter[205887]: ERROR   22:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:49:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:49:31 compute-0 nova_compute[189508]: 2025-12-01 22:49:31.516 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:35 compute-0 nova_compute[189508]: 2025-12-01 22:49:35.209 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.270 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.272 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.273 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.282 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'db72b066-1974-41bb-a917-13b5ba129196', 'name': 'test_0', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1ddf530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.287 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dae82663-6de4-4397-8aab-9559ddeaec24', 'name': 'vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u', 'flavor': {'id': 'aa9783c0-34c0-4a4d-bc86-59429edc9395', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'ca09b2c0-a624-4fb0-b624-b8d92d761f4a'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'user_id': '3b810e864d6c4d058e539f62ad181096', 'hostId': '968321c069642be9d1a3fa358b5b3f63dc1f2874c8cdb32415844c3d', 'status': 'active', 'metadata': {'metering.server_group': '40d7879f-33f5-4fcb-8784-d9088730e18f'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.287 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.287 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.287 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.288 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.289 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T22:49:35.287926) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.295 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.300 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.301 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.301 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.301 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.302 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.302 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.302 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T22:49:35.301894) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.303 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.303 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.303 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.303 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T22:49:35.303412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.304 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.304 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.304 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.304 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.304 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.304 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.305 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T22:49:35.304747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.342 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.343 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.343 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.381 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.382 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.383 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.383 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.383 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.384 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.384 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.384 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.385 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T22:49:35.384436) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.384 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.490 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.491 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.492 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.608 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.609 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.609 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.610 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.610 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.610 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.611 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.611 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.611 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.611 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 484161753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.611 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 126486600 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.612 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.latency volume: 84264950 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.611 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T22:49:35.611248) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.612 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.latency volume: 529113669 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.612 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.latency volume: 125664984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.612 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.latency volume: 99600138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.613 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.613 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.613 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.613 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.613 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.614 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.614 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.614 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.614 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.615 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.615 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.615 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.615 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T22:49:35.613471) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.615 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.615 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.616 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.616 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.616 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.616 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.616 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.617 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T22:49:35.615994) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.617 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.617 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.618 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.618 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.618 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.618 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.618 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T22:49:35.618496) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.619 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.619 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.619 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.619 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.620 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.620 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.620 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.620 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.621 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.621 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.621 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.621 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.621 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.621 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.621 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.622 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.622 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.622 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.623 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T22:49:35.621370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.623 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.623 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.623 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.623 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.623 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.623 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 2925316221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.624 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 17009348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.624 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.624 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.latency volume: 1954219616 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.624 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.latency volume: 13544625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.625 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.625 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T22:49:35.623854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.625 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.625 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.625 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.625 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.625 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.626 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T22:49:35.625919) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.658 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/cpu volume: 47330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.694 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/cpu volume: 40640000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.695 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.695 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.695 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.695 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.695 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.695 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.695 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.696 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.696 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.696 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.696 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.696 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.696 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.696 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.697 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.697 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.697 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.697 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.697 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.697 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.697 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.697 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.697 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.698 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.698 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.699 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.699 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.699 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T22:49:35.695730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.699 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.699 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T22:49:35.696872) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.699 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T22:49:35.697873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.699 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.699 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.700 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.700 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.700 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.700 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.700 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.701 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.701 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.701 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.701 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.701 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.701 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.702 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.702 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.702 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.702 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T22:49:35.700333) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T22:49:35.701157) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.702 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.703 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.703 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.703 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.703 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T22:49:35.702395) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.703 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.703 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.703 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.703 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T22:49:35.703359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.704 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.704 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.704 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.704 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.704 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.704 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.704 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T22:49:35.704487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.705 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.705 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.705 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.705 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.705 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.705 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.705 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.706 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T22:49:35.705635) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.706 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.706 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.706 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.706 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.706 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.707 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.707 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.707 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.707 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.707 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.707 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.708 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/memory.usage volume: 48.75390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.708 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.708 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T22:49:35.706706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.708 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T22:49:35.707901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.708 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.709 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.709 14 DEBUG ceilometer.compute.pollsters [-] db72b066-1974-41bb-a917-13b5ba129196/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.709 14 DEBUG ceilometer.compute.pollsters [-] dae82663-6de4-4397-8aab-9559ddeaec24/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.709 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.710 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.710 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.710 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.710 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T22:49:35.709125) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.710 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:49:35.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:49:36 compute-0 nova_compute[189508]: 2025-12-01 22:49:36.519 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:40 compute-0 nova_compute[189508]: 2025-12-01 22:49:40.212 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:41 compute-0 nova_compute[189508]: 2025-12-01 22:49:41.194 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:49:41 compute-0 nova_compute[189508]: 2025-12-01 22:49:41.523 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:42 compute-0 nova_compute[189508]: 2025-12-01 22:49:42.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:49:44 compute-0 podman[248520]: 2025-12-01 22:49:44.857498203 +0000 UTC m=+0.125107790 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:49:45 compute-0 nova_compute[189508]: 2025-12-01 22:49:45.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:49:45 compute-0 nova_compute[189508]: 2025-12-01 22:49:45.215 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:46 compute-0 nova_compute[189508]: 2025-12-01 22:49:46.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:49:46 compute-0 nova_compute[189508]: 2025-12-01 22:49:46.527 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:46 compute-0 podman[248543]: 2025-12-01 22:49:46.859620141 +0000 UTC m=+0.118145684 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 22:49:47 compute-0 nova_compute[189508]: 2025-12-01 22:49:47.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:49:47 compute-0 nova_compute[189508]: 2025-12-01 22:49:47.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:49:49 compute-0 nova_compute[189508]: 2025-12-01 22:49:49.202 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:49:49 compute-0 nova_compute[189508]: 2025-12-01 22:49:49.203 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:49:49 compute-0 nova_compute[189508]: 2025-12-01 22:49:49.734 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:49:49 compute-0 nova_compute[189508]: 2025-12-01 22:49:49.734 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:49:49 compute-0 nova_compute[189508]: 2025-12-01 22:49:49.735 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:49:49 compute-0 podman[248564]: 2025-12-01 22:49:49.841920369 +0000 UTC m=+0.107052930 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 22:49:50 compute-0 nova_compute[189508]: 2025-12-01 22:49:50.219 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:50 compute-0 nova_compute[189508]: 2025-12-01 22:49:50.995 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Updating instance_info_cache with network_info: [{"id": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "address": "fa:16:3e:a3:f6:49", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4f1e6ff-94", "ovs_interfaceid": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.014 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.015 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.237 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.237 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.238 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.238 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.348 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.445 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.447 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.517 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.519 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.542 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.590 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.591 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.663 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.678 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.748 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.749 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.847 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.849 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.968 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:49:51 compute-0 nova_compute[189508]: 2025-12-01 22:49:51.969 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:49:52 compute-0 nova_compute[189508]: 2025-12-01 22:49:52.041 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:49:52 compute-0 nova_compute[189508]: 2025-12-01 22:49:52.531 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:49:52 compute-0 nova_compute[189508]: 2025-12-01 22:49:52.533 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4890MB free_disk=72.1509017944336GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:49:52 compute-0 nova_compute[189508]: 2025-12-01 22:49:52.533 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:49:52 compute-0 nova_compute[189508]: 2025-12-01 22:49:52.534 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:49:52 compute-0 nova_compute[189508]: 2025-12-01 22:49:52.659 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:49:52 compute-0 nova_compute[189508]: 2025-12-01 22:49:52.660 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance dae82663-6de4-4397-8aab-9559ddeaec24 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:49:52 compute-0 nova_compute[189508]: 2025-12-01 22:49:52.660 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:49:52 compute-0 nova_compute[189508]: 2025-12-01 22:49:52.660 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:49:52 compute-0 nova_compute[189508]: 2025-12-01 22:49:52.729 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:49:52 compute-0 nova_compute[189508]: 2025-12-01 22:49:52.755 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:49:52 compute-0 nova_compute[189508]: 2025-12-01 22:49:52.757 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:49:52 compute-0 nova_compute[189508]: 2025-12-01 22:49:52.758 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:49:53 compute-0 podman[248608]: 2025-12-01 22:49:53.869696871 +0000 UTC m=+0.138449209 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 22:49:53 compute-0 podman[248607]: 2025-12-01 22:49:53.903632521 +0000 UTC m=+0.174237101 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  1 22:49:55 compute-0 nova_compute[189508]: 2025-12-01 22:49:55.222 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:56 compute-0 nova_compute[189508]: 2025-12-01 22:49:56.547 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:49:57 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Dec  1 22:49:57 compute-0 systemd[1]: session-30.scope: Consumed 5.476s CPU time.
Dec  1 22:49:57 compute-0 systemd-logind[788]: Session 30 logged out. Waiting for processes to exit.
Dec  1 22:49:57 compute-0 systemd-logind[788]: Removed session 30.
Dec  1 22:49:58 compute-0 podman[248648]: 2025-12-01 22:49:58.808210727 +0000 UTC m=+0.086808697 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 22:49:58 compute-0 podman[248649]: 2025-12-01 22:49:58.839480182 +0000 UTC m=+0.100733861 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, 
tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 22:49:58 compute-0 podman[248650]: 2025-12-01 22:49:58.871183269 +0000 UTC m=+0.120547372 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, release=1755695350, vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 22:49:58 compute-0 podman[248657]: 2025-12-01 22:49:58.87404978 +0000 UTC m=+0.121544420 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, distribution-scope=public, vendor=Red Hat, Inc., release=1214.1726694543, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': 
['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4)
Dec  1 22:49:59 compute-0 podman[203693]: time="2025-12-01T22:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:49:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:49:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
Dec  1 22:50:00 compute-0 nova_compute[189508]: 2025-12-01 22:50:00.225 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:01 compute-0 openstack_network_exporter[205887]: ERROR   22:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:50:01 compute-0 openstack_network_exporter[205887]: ERROR   22:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:50:01 compute-0 openstack_network_exporter[205887]: ERROR   22:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:50:01 compute-0 openstack_network_exporter[205887]: ERROR   22:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:50:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:50:01 compute-0 openstack_network_exporter[205887]: ERROR   22:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:50:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:50:01 compute-0 nova_compute[189508]: 2025-12-01 22:50:01.552 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:50:04.629 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:50:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:50:04.630 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:50:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:50:04.631 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:50:05 compute-0 nova_compute[189508]: 2025-12-01 22:50:05.229 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:06 compute-0 nova_compute[189508]: 2025-12-01 22:50:06.555 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:10 compute-0 nova_compute[189508]: 2025-12-01 22:50:10.233 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:11 compute-0 nova_compute[189508]: 2025-12-01 22:50:11.559 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:15 compute-0 nova_compute[189508]: 2025-12-01 22:50:15.237 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:15 compute-0 podman[248729]: 2025-12-01 22:50:15.858391834 +0000 UTC m=+0.121555301 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 22:50:16 compute-0 nova_compute[189508]: 2025-12-01 22:50:16.561 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:17 compute-0 podman[248751]: 2025-12-01 22:50:17.858958346 +0000 UTC m=+0.128220769 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 22:50:20 compute-0 nova_compute[189508]: 2025-12-01 22:50:20.242 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:20 compute-0 podman[248772]: 2025-12-01 22:50:20.829626486 +0000 UTC m=+0.102437239 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 22:50:21 compute-0 nova_compute[189508]: 2025-12-01 22:50:21.565 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:24 compute-0 podman[248794]: 2025-12-01 22:50:24.858203167 +0000 UTC m=+0.110689203 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Dec  1 22:50:24 compute-0 podman[248793]: 2025-12-01 22:50:24.92474998 +0000 UTC m=+0.187912208 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec  1 22:50:25 compute-0 nova_compute[189508]: 2025-12-01 22:50:25.245 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:26 compute-0 nova_compute[189508]: 2025-12-01 22:50:26.568 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:29 compute-0 podman[203693]: time="2025-12-01T22:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:50:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:50:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Dec  1 22:50:29 compute-0 podman[248835]: 2025-12-01 22:50:29.865146952 +0000 UTC m=+0.124223736 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 22:50:29 compute-0 podman[248836]: 2025-12-01 22:50:29.868476656 +0000 UTC m=+0.123516666 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 22:50:29 compute-0 podman[248838]: 2025-12-01 22:50:29.897854347 +0000 UTC m=+0.131095200 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, release=1214.1726694543, name=ubi9, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Dec  1 22:50:29 compute-0 podman[248837]: 2025-12-01 22:50:29.901022567 +0000 UTC m=+0.119178353 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, build-date=2025-08-20T13:12:41, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, 
summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.33.7)
Dec  1 22:50:30 compute-0 nova_compute[189508]: 2025-12-01 22:50:30.248 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:31 compute-0 openstack_network_exporter[205887]: ERROR   22:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:50:31 compute-0 openstack_network_exporter[205887]: ERROR   22:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:50:31 compute-0 openstack_network_exporter[205887]: ERROR   22:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:50:31 compute-0 openstack_network_exporter[205887]: ERROR   22:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:50:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:50:31 compute-0 openstack_network_exporter[205887]: ERROR   22:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:50:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:50:31 compute-0 nova_compute[189508]: 2025-12-01 22:50:31.571 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:35 compute-0 nova_compute[189508]: 2025-12-01 22:50:35.251 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:36 compute-0 nova_compute[189508]: 2025-12-01 22:50:36.576 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:40 compute-0 nova_compute[189508]: 2025-12-01 22:50:40.255 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:40 compute-0 nova_compute[189508]: 2025-12-01 22:50:40.754 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:50:41 compute-0 nova_compute[189508]: 2025-12-01 22:50:41.237 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:50:41 compute-0 nova_compute[189508]: 2025-12-01 22:50:41.581 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:44 compute-0 nova_compute[189508]: 2025-12-01 22:50:44.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:50:45 compute-0 nova_compute[189508]: 2025-12-01 22:50:45.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:50:45 compute-0 nova_compute[189508]: 2025-12-01 22:50:45.257 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:46 compute-0 nova_compute[189508]: 2025-12-01 22:50:46.586 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:46 compute-0 podman[248914]: 2025-12-01 22:50:46.801855734 +0000 UTC m=+0.072340718 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:50:47 compute-0 nova_compute[189508]: 2025-12-01 22:50:47.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:50:47 compute-0 nova_compute[189508]: 2025-12-01 22:50:47.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:50:47 compute-0 nova_compute[189508]: 2025-12-01 22:50:47.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:50:48 compute-0 podman[248938]: 2025-12-01 22:50:48.854267443 +0000 UTC m=+0.137037979 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 22:50:49 compute-0 nova_compute[189508]: 2025-12-01 22:50:49.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:50:49 compute-0 nova_compute[189508]: 2025-12-01 22:50:49.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:50:49 compute-0 nova_compute[189508]: 2025-12-01 22:50:49.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:50:49 compute-0 nova_compute[189508]: 2025-12-01 22:50:49.702 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:50:49 compute-0 nova_compute[189508]: 2025-12-01 22:50:49.703 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:50:49 compute-0 nova_compute[189508]: 2025-12-01 22:50:49.704 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:50:49 compute-0 nova_compute[189508]: 2025-12-01 22:50:49.706 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid db72b066-1974-41bb-a917-13b5ba129196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:50:50 compute-0 nova_compute[189508]: 2025-12-01 22:50:50.261 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.213 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updating instance_info_cache with network_info: [{"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.300 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-db72b066-1974-41bb-a917-13b5ba129196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.300 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.301 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.302 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.428 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.428 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.429 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.429 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.547 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.588 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.610 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.611 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.671 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.672 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.739 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.740 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:50:51 compute-0 podman[248967]: 2025-12-01 22:50:51.781350571 +0000 UTC m=+0.066541013 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.815 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.824 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.897 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.898 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.959 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:50:51 compute-0 nova_compute[189508]: 2025-12-01 22:50:51.960 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:50:52 compute-0 nova_compute[189508]: 2025-12-01 22:50:52.024 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:50:52 compute-0 nova_compute[189508]: 2025-12-01 22:50:52.025 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:50:52 compute-0 nova_compute[189508]: 2025-12-01 22:50:52.090 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:50:52 compute-0 nova_compute[189508]: 2025-12-01 22:50:52.449 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:50:52 compute-0 nova_compute[189508]: 2025-12-01 22:50:52.451 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4904MB free_disk=72.1509017944336GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:50:52 compute-0 nova_compute[189508]: 2025-12-01 22:50:52.451 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:50:52 compute-0 nova_compute[189508]: 2025-12-01 22:50:52.452 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:50:52 compute-0 nova_compute[189508]: 2025-12-01 22:50:52.665 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance db72b066-1974-41bb-a917-13b5ba129196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:50:52 compute-0 nova_compute[189508]: 2025-12-01 22:50:52.666 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance dae82663-6de4-4397-8aab-9559ddeaec24 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:50:52 compute-0 nova_compute[189508]: 2025-12-01 22:50:52.666 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:50:52 compute-0 nova_compute[189508]: 2025-12-01 22:50:52.666 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:50:52 compute-0 nova_compute[189508]: 2025-12-01 22:50:52.733 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:50:52 compute-0 nova_compute[189508]: 2025-12-01 22:50:52.795 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:50:52 compute-0 nova_compute[189508]: 2025-12-01 22:50:52.797 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:50:52 compute-0 nova_compute[189508]: 2025-12-01 22:50:52.798 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:50:53 compute-0 nova_compute[189508]: 2025-12-01 22:50:53.697 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:50:55 compute-0 nova_compute[189508]: 2025-12-01 22:50:55.263 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:55 compute-0 podman[249003]: 2025-12-01 22:50:55.848126714 +0000 UTC m=+0.122231270 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 22:50:55 compute-0 podman[249004]: 2025-12-01 22:50:55.859916627 +0000 UTC m=+0.115628012 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 22:50:56 compute-0 nova_compute[189508]: 2025-12-01 22:50:56.592 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:50:59 compute-0 podman[203693]: time="2025-12-01T22:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:50:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:50:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec  1 22:51:00 compute-0 nova_compute[189508]: 2025-12-01 22:51:00.266 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:00 compute-0 podman[249046]: 2025-12-01 22:51:00.82616992 +0000 UTC m=+0.094463363 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 22:51:00 compute-0 podman[249048]: 2025-12-01 22:51:00.832686955 +0000 UTC m=+0.092314883 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Dec  1 22:51:00 compute-0 podman[249047]: 2025-12-01 22:51:00.842802981 +0000 UTC m=+0.101170833 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  1 22:51:00 compute-0 podman[249049]: 2025-12-01 22:51:00.895018308 +0000 UTC m=+0.148584254 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, distribution-scope=public, container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, 
io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 22:51:01 compute-0 openstack_network_exporter[205887]: ERROR   22:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:51:01 compute-0 openstack_network_exporter[205887]: ERROR   22:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:51:01 compute-0 openstack_network_exporter[205887]: ERROR   22:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:51:01 compute-0 openstack_network_exporter[205887]: ERROR   22:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:51:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:51:01 compute-0 openstack_network_exporter[205887]: ERROR   22:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:51:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:51:01 compute-0 nova_compute[189508]: 2025-12-01 22:51:01.595 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:04.630 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:51:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:04.631 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:51:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:04.632 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:51:04 compute-0 nova_compute[189508]: 2025-12-01 22:51:04.990 189512 DEBUG nova.compute.manager [req-6dc264ff-cbd7-420f-985e-65fa0b31b51e req-459222a0-4e57-42f3-a1db-27626585cf39 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Received event network-changed-d4f1e6ff-9498-4994-811a-29c1f1b406a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:51:04 compute-0 nova_compute[189508]: 2025-12-01 22:51:04.991 189512 DEBUG nova.compute.manager [req-6dc264ff-cbd7-420f-985e-65fa0b31b51e req-459222a0-4e57-42f3-a1db-27626585cf39 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Refreshing instance network info cache due to event network-changed-d4f1e6ff-9498-4994-811a-29c1f1b406a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:51:04 compute-0 nova_compute[189508]: 2025-12-01 22:51:04.991 189512 DEBUG oslo_concurrency.lockutils [req-6dc264ff-cbd7-420f-985e-65fa0b31b51e req-459222a0-4e57-42f3-a1db-27626585cf39 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:51:04 compute-0 nova_compute[189508]: 2025-12-01 22:51:04.991 189512 DEBUG oslo_concurrency.lockutils [req-6dc264ff-cbd7-420f-985e-65fa0b31b51e req-459222a0-4e57-42f3-a1db-27626585cf39 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:51:04 compute-0 nova_compute[189508]: 2025-12-01 22:51:04.992 189512 DEBUG nova.network.neutron [req-6dc264ff-cbd7-420f-985e-65fa0b31b51e req-459222a0-4e57-42f3-a1db-27626585cf39 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Refreshing network info cache for port d4f1e6ff-9498-4994-811a-29c1f1b406a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.091 189512 DEBUG oslo_concurrency.lockutils [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "dae82663-6de4-4397-8aab-9559ddeaec24" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.092 189512 DEBUG oslo_concurrency.lockutils [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "dae82663-6de4-4397-8aab-9559ddeaec24" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.093 189512 DEBUG oslo_concurrency.lockutils [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.093 189512 DEBUG oslo_concurrency.lockutils [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.093 189512 DEBUG oslo_concurrency.lockutils [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.095 189512 INFO nova.compute.manager [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Terminating instance#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.096 189512 DEBUG nova.compute.manager [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 22:51:05 compute-0 kernel: tapd4f1e6ff-94 (unregistering): left promiscuous mode
Dec  1 22:51:05 compute-0 NetworkManager[56278]: <info>  [1764629465.1420] device (tapd4f1e6ff-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 22:51:05 compute-0 ovn_controller[97770]: 2025-12-01T22:51:05Z|00058|binding|INFO|Releasing lport d4f1e6ff-9498-4994-811a-29c1f1b406a3 from this chassis (sb_readonly=0)
Dec  1 22:51:05 compute-0 ovn_controller[97770]: 2025-12-01T22:51:05Z|00059|binding|INFO|Setting lport d4f1e6ff-9498-4994-811a-29c1f1b406a3 down in Southbound
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.154 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:05 compute-0 ovn_controller[97770]: 2025-12-01T22:51:05Z|00060|binding|INFO|Removing iface tapd4f1e6ff-94 ovn-installed in OVS
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.159 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:05 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:05.169 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:f6:49 192.168.0.51'], port_security=['fa:16:3e:a3:f6:49 192.168.0.51'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-37pfkxggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-port-gnvnsxaqfbgg', 'neutron:cidrs': '192.168.0.51/24', 'neutron:device_id': 'dae82663-6de4-4397-8aab-9559ddeaec24', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-37pfkxggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-port-gnvnsxaqfbgg', 'neutron:project_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a56d0f98-60b7-42d6-a9fa-4c77301b81c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8157a1f-e2f4-4050-ab6e-a95d2880ddbb, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=d4f1e6ff-9498-4994-811a-29c1f1b406a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:51:05 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:05.170 106662 INFO neutron.agent.ovn.metadata.agent [-] Port d4f1e6ff-9498-4994-811a-29c1f1b406a3 in datapath dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c unbound from our chassis#033[00m
Dec  1 22:51:05 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:05.171 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.174 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:05 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:05.193 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[71355c2e-e079-4f10-aa78-b03ff066cb88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:51:05 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec  1 22:51:05 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 2min 8.834s CPU time.
Dec  1 22:51:05 compute-0 systemd-machined[155759]: Machine qemu-4-instance-00000004 terminated.
Dec  1 22:51:05 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:05.229 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[e1694bd8-3572-4964-8592-006b08f5697d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:51:05 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:05.232 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[2f683250-8bff-47b2-b7c8-ee6a6245c91d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.268 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:05 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:05.268 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[eecdd0e5-9c4c-49c1-8362-16badc1674b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:51:05 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:05.289 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[c142714b-8144-43f7-8028-d77f931498e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd6e3c27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:b1:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 16, 'rx_bytes': 616, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 16, 'rx_bytes': 616, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384760, 'reachable_time': 23712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249145, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:51:05 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:05.307 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[48bf9d63-1b28-4c07-a61c-7ecb34cc34a1]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapdd6e3c27-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384779, 'tstamp': 384779}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249146, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdd6e3c27-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384784, 'tstamp': 384784}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249146, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:51:05 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:05.310 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd6e3c27-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.313 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:05 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:05.314 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.316 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.324 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:05 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:05.325 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd6e3c27-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:51:05 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:05.326 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:51:05 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:05.328 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdd6e3c27-10, col_values=(('external_ids', {'iface-id': 'e303b09b-4673-4950-aa2d-91085a5bc5f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:51:05 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:05.329 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:51:05 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:05.331 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.401 189512 INFO nova.virt.libvirt.driver [-] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Instance destroyed successfully.#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.402 189512 DEBUG nova.objects.instance [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lazy-loading 'resources' on Instance uuid dae82663-6de4-4397-8aab-9559ddeaec24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.415 189512 DEBUG nova.virt.libvirt.vif [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:40:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-xggku2d-6zkr5wlfztfw-ynr4fgxtxwgu-vnf-ehiyohdldm5u',id=4,image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T22:40:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='40d7879f-33f5-4fcb-8784-d9088730e18f'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='af2fbf0e1b5f40c19aed69d241db7727',ramdisk_id='',reservation_id='r-qucg0bnj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T22:40:57Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNTMzMjU4OTYzMTAzNjE2MTU4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA1MzMyNTg5NjMxMDM2MTYxNTg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDUzMzI1ODk2MzEwMzYxNjE1OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA1MzMyNTg5NjMxMDM2MTYxNTg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNTMzMjU4OTYzMTAzNjE2MTU4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNTMzMjU4OTYzMTAzNjE2MTU4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  1 22:51:05 compute-0 nova_compute[189508]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDUzM
zI1ODk2MzEwMzYxNjE1OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA1MzMyNTg5NjMxMDM2MTYxNTg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNTMzMjU4OTYzMTAzNjE2MTU4PT0tLQo=',user_id='3b810e864d6c4d058e539f62ad181096',uuid=dae82663-6de4-4397-8aab-9559ddeaec24,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "address": "fa:16:3e:a3:f6:49", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4f1e6ff-94", "ovs_interfaceid": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.416 189512 DEBUG nova.network.os_vif_util [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converting VIF {"id": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "address": "fa:16:3e:a3:f6:49", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4f1e6ff-94", "ovs_interfaceid": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.416 189512 DEBUG nova.network.os_vif_util [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a3:f6:49,bridge_name='br-int',has_traffic_filtering=True,id=d4f1e6ff-9498-4994-811a-29c1f1b406a3,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd4f1e6ff-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.417 189512 DEBUG os_vif [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:f6:49,bridge_name='br-int',has_traffic_filtering=True,id=d4f1e6ff-9498-4994-811a-29c1f1b406a3,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd4f1e6ff-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.420 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.421 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4f1e6ff-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.424 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.426 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.430 189512 INFO os_vif [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:f6:49,bridge_name='br-int',has_traffic_filtering=True,id=d4f1e6ff-9498-4994-811a-29c1f1b406a3,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd4f1e6ff-94')#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.431 189512 INFO nova.virt.libvirt.driver [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Deleting instance files /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24_del#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.433 189512 INFO nova.virt.libvirt.driver [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Deletion of /var/lib/nova/instances/dae82663-6de4-4397-8aab-9559ddeaec24_del complete#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.499 189512 INFO nova.compute.manager [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Took 0.40 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.501 189512 DEBUG oslo.service.loopingcall [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.502 189512 DEBUG nova.compute.manager [-] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.502 189512 DEBUG nova.network.neutron [-] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.581 189512 DEBUG nova.compute.manager [req-b628380c-afbd-49ca-9da8-f8bebd87e521 req-817b8317-80b4-48fb-b0fa-318d366ad156 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Received event network-vif-unplugged-d4f1e6ff-9498-4994-811a-29c1f1b406a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.582 189512 DEBUG oslo_concurrency.lockutils [req-b628380c-afbd-49ca-9da8-f8bebd87e521 req-817b8317-80b4-48fb-b0fa-318d366ad156 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.583 189512 DEBUG oslo_concurrency.lockutils [req-b628380c-afbd-49ca-9da8-f8bebd87e521 req-817b8317-80b4-48fb-b0fa-318d366ad156 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.584 189512 DEBUG oslo_concurrency.lockutils [req-b628380c-afbd-49ca-9da8-f8bebd87e521 req-817b8317-80b4-48fb-b0fa-318d366ad156 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.585 189512 DEBUG nova.compute.manager [req-b628380c-afbd-49ca-9da8-f8bebd87e521 req-817b8317-80b4-48fb-b0fa-318d366ad156 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] No waiting events found dispatching network-vif-unplugged-d4f1e6ff-9498-4994-811a-29c1f1b406a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:51:05 compute-0 rsyslogd[236992]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 22:51:05.415 189512 DEBUG nova.virt.libvirt.vif [None req-cb72a95f-64 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 22:51:05 compute-0 nova_compute[189508]: 2025-12-01 22:51:05.586 189512 DEBUG nova.compute.manager [req-b628380c-afbd-49ca-9da8-f8bebd87e521 req-817b8317-80b4-48fb-b0fa-318d366ad156 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Received event network-vif-unplugged-d4f1e6ff-9498-4994-811a-29c1f1b406a3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 22:51:06 compute-0 nova_compute[189508]: 2025-12-01 22:51:06.200 189512 DEBUG nova.network.neutron [req-6dc264ff-cbd7-420f-985e-65fa0b31b51e req-459222a0-4e57-42f3-a1db-27626585cf39 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Updated VIF entry in instance network info cache for port d4f1e6ff-9498-4994-811a-29c1f1b406a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:51:06 compute-0 nova_compute[189508]: 2025-12-01 22:51:06.201 189512 DEBUG nova.network.neutron [req-6dc264ff-cbd7-420f-985e-65fa0b31b51e req-459222a0-4e57-42f3-a1db-27626585cf39 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Updating instance_info_cache with network_info: [{"id": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "address": "fa:16:3e:a3:f6:49", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4f1e6ff-94", "ovs_interfaceid": "d4f1e6ff-9498-4994-811a-29c1f1b406a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:51:06 compute-0 nova_compute[189508]: 2025-12-01 22:51:06.235 189512 DEBUG oslo_concurrency.lockutils [req-6dc264ff-cbd7-420f-985e-65fa0b31b51e req-459222a0-4e57-42f3-a1db-27626585cf39 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-dae82663-6de4-4397-8aab-9559ddeaec24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:51:06 compute-0 nova_compute[189508]: 2025-12-01 22:51:06.805 189512 DEBUG nova.network.neutron [-] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:51:06 compute-0 nova_compute[189508]: 2025-12-01 22:51:06.822 189512 INFO nova.compute.manager [-] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Took 1.32 seconds to deallocate network for instance.#033[00m
Dec  1 22:51:06 compute-0 nova_compute[189508]: 2025-12-01 22:51:06.863 189512 DEBUG oslo_concurrency.lockutils [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:51:06 compute-0 nova_compute[189508]: 2025-12-01 22:51:06.864 189512 DEBUG oslo_concurrency.lockutils [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:51:06 compute-0 nova_compute[189508]: 2025-12-01 22:51:06.951 189512 DEBUG nova.compute.provider_tree [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:51:06 compute-0 nova_compute[189508]: 2025-12-01 22:51:06.970 189512 DEBUG nova.scheduler.client.report [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:51:06 compute-0 nova_compute[189508]: 2025-12-01 22:51:06.991 189512 DEBUG oslo_concurrency.lockutils [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:51:07 compute-0 nova_compute[189508]: 2025-12-01 22:51:07.035 189512 INFO nova.scheduler.client.report [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Deleted allocations for instance dae82663-6de4-4397-8aab-9559ddeaec24#033[00m
Dec  1 22:51:07 compute-0 nova_compute[189508]: 2025-12-01 22:51:07.115 189512 DEBUG oslo_concurrency.lockutils [None req-cb72a95f-6415-4fde-b29e-9e34c3e08eaa 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "dae82663-6de4-4397-8aab-9559ddeaec24" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:51:07 compute-0 nova_compute[189508]: 2025-12-01 22:51:07.697 189512 DEBUG nova.compute.manager [req-f821bfad-317a-4de3-8573-7d2673e7a964 req-79ea5926-8af4-436a-8daf-121f8b509d52 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Received event network-vif-plugged-d4f1e6ff-9498-4994-811a-29c1f1b406a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:51:07 compute-0 nova_compute[189508]: 2025-12-01 22:51:07.698 189512 DEBUG oslo_concurrency.lockutils [req-f821bfad-317a-4de3-8573-7d2673e7a964 req-79ea5926-8af4-436a-8daf-121f8b509d52 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:51:07 compute-0 nova_compute[189508]: 2025-12-01 22:51:07.700 189512 DEBUG oslo_concurrency.lockutils [req-f821bfad-317a-4de3-8573-7d2673e7a964 req-79ea5926-8af4-436a-8daf-121f8b509d52 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:51:07 compute-0 nova_compute[189508]: 2025-12-01 22:51:07.700 189512 DEBUG oslo_concurrency.lockutils [req-f821bfad-317a-4de3-8573-7d2673e7a964 req-79ea5926-8af4-436a-8daf-121f8b509d52 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "dae82663-6de4-4397-8aab-9559ddeaec24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:51:07 compute-0 nova_compute[189508]: 2025-12-01 22:51:07.701 189512 DEBUG nova.compute.manager [req-f821bfad-317a-4de3-8573-7d2673e7a964 req-79ea5926-8af4-436a-8daf-121f8b509d52 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] No waiting events found dispatching network-vif-plugged-d4f1e6ff-9498-4994-811a-29c1f1b406a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:51:07 compute-0 nova_compute[189508]: 2025-12-01 22:51:07.701 189512 WARNING nova.compute.manager [req-f821bfad-317a-4de3-8573-7d2673e7a964 req-79ea5926-8af4-436a-8daf-121f8b509d52 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Received unexpected event network-vif-plugged-d4f1e6ff-9498-4994-811a-29c1f1b406a3 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 22:51:10 compute-0 nova_compute[189508]: 2025-12-01 22:51:10.273 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:10 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:10.334 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:51:10 compute-0 nova_compute[189508]: 2025-12-01 22:51:10.425 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:15 compute-0 nova_compute[189508]: 2025-12-01 22:51:15.276 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:15 compute-0 nova_compute[189508]: 2025-12-01 22:51:15.429 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:17 compute-0 podman[249169]: 2025-12-01 22:51:17.854363868 +0000 UTC m=+0.113994046 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:51:19 compute-0 podman[249195]: 2025-12-01 22:51:19.857191947 +0000 UTC m=+0.129762652 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 22:51:20 compute-0 nova_compute[189508]: 2025-12-01 22:51:20.279 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:20 compute-0 nova_compute[189508]: 2025-12-01 22:51:20.399 189512 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764629465.3968687, dae82663-6de4-4397-8aab-9559ddeaec24 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:51:20 compute-0 nova_compute[189508]: 2025-12-01 22:51:20.400 189512 INFO nova.compute.manager [-] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] VM Stopped (Lifecycle Event)#033[00m
Dec  1 22:51:20 compute-0 nova_compute[189508]: 2025-12-01 22:51:20.431 189512 DEBUG nova.compute.manager [None req-c6819595-a014-41a3-9d37-5cf8d07c4ff1 - - - - - -] [instance: dae82663-6de4-4397-8aab-9559ddeaec24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:51:20 compute-0 nova_compute[189508]: 2025-12-01 22:51:20.432 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:22 compute-0 podman[249218]: 2025-12-01 22:51:22.834676302 +0000 UTC m=+0.113897554 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible)
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.306 189512 DEBUG oslo_concurrency.lockutils [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "db72b066-1974-41bb-a917-13b5ba129196" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.308 189512 DEBUG oslo_concurrency.lockutils [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "db72b066-1974-41bb-a917-13b5ba129196" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.308 189512 DEBUG oslo_concurrency.lockutils [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "db72b066-1974-41bb-a917-13b5ba129196-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.309 189512 DEBUG oslo_concurrency.lockutils [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "db72b066-1974-41bb-a917-13b5ba129196-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.309 189512 DEBUG oslo_concurrency.lockutils [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "db72b066-1974-41bb-a917-13b5ba129196-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.311 189512 INFO nova.compute.manager [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Terminating instance#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.312 189512 DEBUG nova.compute.manager [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 22:51:24 compute-0 kernel: tap64f1c8ea-4a (unregistering): left promiscuous mode
Dec  1 22:51:24 compute-0 NetworkManager[56278]: <info>  [1764629484.3558] device (tap64f1c8ea-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.373 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:24 compute-0 ovn_controller[97770]: 2025-12-01T22:51:24Z|00061|binding|INFO|Releasing lport 64f1c8ea-4ab7-4266-8a8c-466433068355 from this chassis (sb_readonly=0)
Dec  1 22:51:24 compute-0 ovn_controller[97770]: 2025-12-01T22:51:24Z|00062|binding|INFO|Setting lport 64f1c8ea-4ab7-4266-8a8c-466433068355 down in Southbound
Dec  1 22:51:24 compute-0 ovn_controller[97770]: 2025-12-01T22:51:24Z|00063|binding|INFO|Removing iface tap64f1c8ea-4a ovn-installed in OVS
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.383 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:24.409 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:3f:bd 192.168.0.177'], port_security=['fa:16:3e:78:3f:bd 192.168.0.177'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.177/24', 'neutron:device_id': 'db72b066-1974-41bb-a917-13b5ba129196', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af2fbf0e1b5f40c19aed69d241db7727', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a56d0f98-60b7-42d6-a9fa-4c77301b81c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8157a1f-e2f4-4050-ab6e-a95d2880ddbb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=64f1c8ea-4ab7-4266-8a8c-466433068355) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:51:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:24.411 106662 INFO neutron.agent.ovn.metadata.agent [-] Port 64f1c8ea-4ab7-4266-8a8c-466433068355 in datapath dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c unbound from our chassis#033[00m
Dec  1 22:51:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:24.412 106662 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.413 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:24.414 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[82fb48f2-e652-496f-be2b-532971c44a1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:51:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:24.415 106662 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c namespace which is not needed anymore#033[00m
Dec  1 22:51:24 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec  1 22:51:24 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 31.839s CPU time.
Dec  1 22:51:24 compute-0 systemd-machined[155759]: Machine qemu-1-instance-00000001 terminated.
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.630 189512 INFO nova.virt.libvirt.driver [-] [instance: db72b066-1974-41bb-a917-13b5ba129196] Instance destroyed successfully.#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.631 189512 DEBUG nova.objects.instance [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lazy-loading 'resources' on Instance uuid db72b066-1974-41bb-a917-13b5ba129196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:51:24 compute-0 neutron-haproxy-ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c[240102]: [NOTICE]   (240106) : haproxy version is 2.8.14-c23fe91
Dec  1 22:51:24 compute-0 neutron-haproxy-ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c[240102]: [NOTICE]   (240106) : path to executable is /usr/sbin/haproxy
Dec  1 22:51:24 compute-0 neutron-haproxy-ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c[240102]: [WARNING]  (240106) : Exiting Master process...
Dec  1 22:51:24 compute-0 neutron-haproxy-ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c[240102]: [ALERT]    (240106) : Current worker (240108) exited with code 143 (Terminated)
Dec  1 22:51:24 compute-0 neutron-haproxy-ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c[240102]: [WARNING]  (240106) : All workers exited. Exiting... (0)
Dec  1 22:51:24 compute-0 systemd[1]: libpod-ff95b80f6a41a89e49021ae980ba0d2dc0b5f94b4fb3698555ead20fe655e4e7.scope: Deactivated successfully.
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.647 189512 DEBUG nova.virt.libvirt.vif [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:31:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T22:32:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='af2fbf0e1b5f40c19aed69d241db7727',ramdisk_id='',reservation_id='r-efoc96je',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='ca09b2c0-a624-4fb0-b624-b8d92d761f4a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T22:32:15Z,user_data=None,user_id='3b810e864d6c4d058e539f62ad181096',uuid=db72b066-1974-41bb-a917-13b5ba129196,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.648 189512 DEBUG nova.network.os_vif_util [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converting VIF {"id": "64f1c8ea-4ab7-4266-8a8c-466433068355", "address": "fa:16:3e:78:3f:bd", "network": {"id": "dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2fbf0e1b5f40c19aed69d241db7727", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64f1c8ea-4a", "ovs_interfaceid": "64f1c8ea-4ab7-4266-8a8c-466433068355", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:51:24 compute-0 podman[249261]: 2025-12-01 22:51:24.650574129 +0000 UTC m=+0.081087905 container died ff95b80f6a41a89e49021ae980ba0d2dc0b5f94b4fb3698555ead20fe655e4e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.650 189512 DEBUG nova.network.os_vif_util [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:78:3f:bd,bridge_name='br-int',has_traffic_filtering=True,id=64f1c8ea-4ab7-4266-8a8c-466433068355,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64f1c8ea-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.651 189512 DEBUG os_vif [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:3f:bd,bridge_name='br-int',has_traffic_filtering=True,id=64f1c8ea-4ab7-4266-8a8c-466433068355,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64f1c8ea-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.653 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.653 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64f1c8ea-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.658 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.661 189512 INFO os_vif [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:3f:bd,bridge_name='br-int',has_traffic_filtering=True,id=64f1c8ea-4ab7-4266-8a8c-466433068355,network=Network(dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64f1c8ea-4a')#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.662 189512 INFO nova.virt.libvirt.driver [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Deleting instance files /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196_del#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.663 189512 INFO nova.virt.libvirt.driver [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Deletion of /var/lib/nova/instances/db72b066-1974-41bb-a917-13b5ba129196_del complete#033[00m
Dec  1 22:51:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff95b80f6a41a89e49021ae980ba0d2dc0b5f94b4fb3698555ead20fe655e4e7-userdata-shm.mount: Deactivated successfully.
Dec  1 22:51:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-272cfdd874201b1817bf0494d025abaa5502e68a4188167a8eaf3d4514d1c75b-merged.mount: Deactivated successfully.
Dec  1 22:51:24 compute-0 podman[249261]: 2025-12-01 22:51:24.715104435 +0000 UTC m=+0.145618171 container cleanup ff95b80f6a41a89e49021ae980ba0d2dc0b5f94b4fb3698555ead20fe655e4e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.740 189512 INFO nova.compute.manager [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Took 0.43 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.741 189512 DEBUG oslo.service.loopingcall [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.743 189512 DEBUG nova.compute.manager [-] [instance: db72b066-1974-41bb-a917-13b5ba129196] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.743 189512 DEBUG nova.network.neutron [-] [instance: db72b066-1974-41bb-a917-13b5ba129196] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 22:51:24 compute-0 systemd[1]: libpod-conmon-ff95b80f6a41a89e49021ae980ba0d2dc0b5f94b4fb3698555ead20fe655e4e7.scope: Deactivated successfully.
Dec  1 22:51:24 compute-0 podman[249307]: 2025-12-01 22:51:24.810025271 +0000 UTC m=+0.060834123 container remove ff95b80f6a41a89e49021ae980ba0d2dc0b5f94b4fb3698555ead20fe655e4e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 22:51:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:24.825 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[2f28cb90-f28d-4bb0-945e-b3c93dc119d9]: (4, ('Mon Dec  1 10:51:24 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c (ff95b80f6a41a89e49021ae980ba0d2dc0b5f94b4fb3698555ead20fe655e4e7)\nff95b80f6a41a89e49021ae980ba0d2dc0b5f94b4fb3698555ead20fe655e4e7\nMon Dec  1 10:51:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c (ff95b80f6a41a89e49021ae980ba0d2dc0b5f94b4fb3698555ead20fe655e4e7)\nff95b80f6a41a89e49021ae980ba0d2dc0b5f94b4fb3698555ead20fe655e4e7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:51:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:24.827 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c9cdc2-ebbb-4a53-8241-2726d57ae046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:51:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:24.828 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd6e3c27-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.829 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:24 compute-0 kernel: tapdd6e3c27-10: left promiscuous mode
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.847 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:24.850 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f65413-f460-49cb-9d67-6cd7638db12f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:51:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:24.869 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[6af019ca-db03-415e-8e3d-aae3c4cd6ebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:51:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:24.870 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[30095692-b307-4d25-b7fc-9129b2e87217]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:51:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:24.894 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[802c78a4-b514-43db-aed2-1e3d8e04a5cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384747, 'reachable_time': 16630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249318, 'error': None, 'target': 'ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:51:24 compute-0 systemd[1]: run-netns-ovnmeta\x2ddd6e3c27\x2d1d39\x2d4a6a\x2db1c1\x2da9ad7df7618c.mount: Deactivated successfully.
Dec  1 22:51:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:24.911 106770 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dd6e3c27-1d39-4a6a-b1c1-a9ad7df7618c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 22:51:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:51:24.912 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[6645f749-a91e-4c5b-ae3a-266b22fffd1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.926 189512 DEBUG nova.compute.manager [req-ccfe7146-396f-409e-a2d7-bb7cf49c9a52 req-4e8fefd0-f2f5-489f-a7dd-334d38715390 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Received event network-vif-unplugged-64f1c8ea-4ab7-4266-8a8c-466433068355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.927 189512 DEBUG oslo_concurrency.lockutils [req-ccfe7146-396f-409e-a2d7-bb7cf49c9a52 req-4e8fefd0-f2f5-489f-a7dd-334d38715390 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "db72b066-1974-41bb-a917-13b5ba129196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.927 189512 DEBUG oslo_concurrency.lockutils [req-ccfe7146-396f-409e-a2d7-bb7cf49c9a52 req-4e8fefd0-f2f5-489f-a7dd-334d38715390 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "db72b066-1974-41bb-a917-13b5ba129196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.927 189512 DEBUG oslo_concurrency.lockutils [req-ccfe7146-396f-409e-a2d7-bb7cf49c9a52 req-4e8fefd0-f2f5-489f-a7dd-334d38715390 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "db72b066-1974-41bb-a917-13b5ba129196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.928 189512 DEBUG nova.compute.manager [req-ccfe7146-396f-409e-a2d7-bb7cf49c9a52 req-4e8fefd0-f2f5-489f-a7dd-334d38715390 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] No waiting events found dispatching network-vif-unplugged-64f1c8ea-4ab7-4266-8a8c-466433068355 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:51:24 compute-0 nova_compute[189508]: 2025-12-01 22:51:24.928 189512 DEBUG nova.compute.manager [req-ccfe7146-396f-409e-a2d7-bb7cf49c9a52 req-4e8fefd0-f2f5-489f-a7dd-334d38715390 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Received event network-vif-unplugged-64f1c8ea-4ab7-4266-8a8c-466433068355 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 22:51:25 compute-0 nova_compute[189508]: 2025-12-01 22:51:25.281 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:25 compute-0 nova_compute[189508]: 2025-12-01 22:51:25.585 189512 DEBUG nova.network.neutron [-] [instance: db72b066-1974-41bb-a917-13b5ba129196] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:51:25 compute-0 nova_compute[189508]: 2025-12-01 22:51:25.605 189512 INFO nova.compute.manager [-] [instance: db72b066-1974-41bb-a917-13b5ba129196] Took 0.86 seconds to deallocate network for instance.#033[00m
Dec  1 22:51:25 compute-0 nova_compute[189508]: 2025-12-01 22:51:25.656 189512 DEBUG oslo_concurrency.lockutils [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:51:25 compute-0 nova_compute[189508]: 2025-12-01 22:51:25.657 189512 DEBUG oslo_concurrency.lockutils [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:51:25 compute-0 nova_compute[189508]: 2025-12-01 22:51:25.665 189512 DEBUG nova.compute.manager [req-f3a07685-588a-4d6a-8fbc-4a96b092d32c req-44d5cfe3-5e3b-47a7-b337-ceb1b1ee409f c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Received event network-vif-deleted-64f1c8ea-4ab7-4266-8a8c-466433068355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:51:25 compute-0 nova_compute[189508]: 2025-12-01 22:51:25.741 189512 DEBUG nova.compute.provider_tree [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:51:25 compute-0 nova_compute[189508]: 2025-12-01 22:51:25.763 189512 DEBUG nova.scheduler.client.report [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:51:25 compute-0 nova_compute[189508]: 2025-12-01 22:51:25.811 189512 DEBUG oslo_concurrency.lockutils [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:51:25 compute-0 nova_compute[189508]: 2025-12-01 22:51:25.841 189512 INFO nova.scheduler.client.report [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Deleted allocations for instance db72b066-1974-41bb-a917-13b5ba129196#033[00m
Dec  1 22:51:25 compute-0 nova_compute[189508]: 2025-12-01 22:51:25.941 189512 DEBUG oslo_concurrency.lockutils [None req-d788951b-2a16-4804-8acc-c9b7a0b5e55e 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Lock "db72b066-1974-41bb-a917-13b5ba129196" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:51:26 compute-0 podman[249322]: 2025-12-01 22:51:26.845487491 +0000 UTC m=+0.101830162 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 22:51:26 compute-0 podman[249321]: 2025-12-01 22:51:26.91896067 +0000 UTC m=+0.183046020 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Dec  1 22:51:27 compute-0 nova_compute[189508]: 2025-12-01 22:51:27.024 189512 DEBUG nova.compute.manager [req-ecd196e3-5e2c-40cc-929a-f2dc6eee23ab req-8d9806f7-8071-47d9-9efc-fe4c82152d79 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Received event network-vif-plugged-64f1c8ea-4ab7-4266-8a8c-466433068355 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:51:27 compute-0 nova_compute[189508]: 2025-12-01 22:51:27.025 189512 DEBUG oslo_concurrency.lockutils [req-ecd196e3-5e2c-40cc-929a-f2dc6eee23ab req-8d9806f7-8071-47d9-9efc-fe4c82152d79 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "db72b066-1974-41bb-a917-13b5ba129196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:51:27 compute-0 nova_compute[189508]: 2025-12-01 22:51:27.025 189512 DEBUG oslo_concurrency.lockutils [req-ecd196e3-5e2c-40cc-929a-f2dc6eee23ab req-8d9806f7-8071-47d9-9efc-fe4c82152d79 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "db72b066-1974-41bb-a917-13b5ba129196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:51:27 compute-0 nova_compute[189508]: 2025-12-01 22:51:27.025 189512 DEBUG oslo_concurrency.lockutils [req-ecd196e3-5e2c-40cc-929a-f2dc6eee23ab req-8d9806f7-8071-47d9-9efc-fe4c82152d79 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "db72b066-1974-41bb-a917-13b5ba129196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:51:27 compute-0 nova_compute[189508]: 2025-12-01 22:51:27.026 189512 DEBUG nova.compute.manager [req-ecd196e3-5e2c-40cc-929a-f2dc6eee23ab req-8d9806f7-8071-47d9-9efc-fe4c82152d79 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] No waiting events found dispatching network-vif-plugged-64f1c8ea-4ab7-4266-8a8c-466433068355 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:51:27 compute-0 nova_compute[189508]: 2025-12-01 22:51:27.026 189512 WARNING nova.compute.manager [req-ecd196e3-5e2c-40cc-929a-f2dc6eee23ab req-8d9806f7-8071-47d9-9efc-fe4c82152d79 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: db72b066-1974-41bb-a917-13b5ba129196] Received unexpected event network-vif-plugged-64f1c8ea-4ab7-4266-8a8c-466433068355 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 22:51:29 compute-0 nova_compute[189508]: 2025-12-01 22:51:29.657 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:29 compute-0 podman[203693]: time="2025-12-01T22:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:51:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:51:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4337 "" "Go-http-client/1.1"
Dec  1 22:51:30 compute-0 nova_compute[189508]: 2025-12-01 22:51:30.285 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:31 compute-0 openstack_network_exporter[205887]: ERROR   22:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:51:31 compute-0 openstack_network_exporter[205887]: ERROR   22:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:51:31 compute-0 openstack_network_exporter[205887]: ERROR   22:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:51:31 compute-0 openstack_network_exporter[205887]: ERROR   22:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:51:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:51:31 compute-0 openstack_network_exporter[205887]: ERROR   22:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:51:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:51:31 compute-0 podman[249368]: 2025-12-01 22:51:31.87004251 +0000 UTC m=+0.112213536 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, version=9.4, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9, 
vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, config_id=edpm, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 22:51:31 compute-0 podman[249365]: 2025-12-01 22:51:31.883942044 +0000 UTC m=+0.148073041 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:51:31 compute-0 podman[249366]: 2025-12-01 22:51:31.911792522 +0000 UTC m=+0.168199380 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 22:51:31 compute-0 podman[249367]: 2025-12-01 22:51:31.918075749 +0000 UTC m=+0.166805250 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped 
down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, maintainer=Red Hat, Inc.)
Dec  1 22:51:34 compute-0 nova_compute[189508]: 2025-12-01 22:51:34.661 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.271 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.272 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.272 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.275 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c525b440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.284 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.284 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.284 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.285 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.285 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.285 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.286 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.286 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.286 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:51:35 compute-0 nova_compute[189508]: 2025-12-01 22:51:35.287 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.288 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.289 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.290 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.291 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.292 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:51:35.293 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:51:39 compute-0 nova_compute[189508]: 2025-12-01 22:51:39.627 189512 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764629484.6240046, db72b066-1974-41bb-a917-13b5ba129196 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:51:39 compute-0 nova_compute[189508]: 2025-12-01 22:51:39.628 189512 INFO nova.compute.manager [-] [instance: db72b066-1974-41bb-a917-13b5ba129196] VM Stopped (Lifecycle Event)#033[00m
Dec  1 22:51:39 compute-0 nova_compute[189508]: 2025-12-01 22:51:39.664 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:39 compute-0 nova_compute[189508]: 2025-12-01 22:51:39.667 189512 DEBUG nova.compute.manager [None req-bcef69ec-7b99-45fe-9e44-02efdc621822 - - - - - -] [instance: db72b066-1974-41bb-a917-13b5ba129196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:51:40 compute-0 nova_compute[189508]: 2025-12-01 22:51:40.292 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:41 compute-0 nova_compute[189508]: 2025-12-01 22:51:41.194 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:51:44 compute-0 nova_compute[189508]: 2025-12-01 22:51:44.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:51:44 compute-0 nova_compute[189508]: 2025-12-01 22:51:44.668 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:45 compute-0 nova_compute[189508]: 2025-12-01 22:51:45.295 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:47 compute-0 nova_compute[189508]: 2025-12-01 22:51:47.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:51:48 compute-0 podman[249450]: 2025-12-01 22:51:48.849827414 +0000 UTC m=+0.106397281 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:51:49 compute-0 nova_compute[189508]: 2025-12-01 22:51:49.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:51:49 compute-0 nova_compute[189508]: 2025-12-01 22:51:49.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:51:49 compute-0 nova_compute[189508]: 2025-12-01 22:51:49.223 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 22:51:49 compute-0 nova_compute[189508]: 2025-12-01 22:51:49.224 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:51:49 compute-0 nova_compute[189508]: 2025-12-01 22:51:49.225 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:51:49 compute-0 nova_compute[189508]: 2025-12-01 22:51:49.225 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:51:49 compute-0 nova_compute[189508]: 2025-12-01 22:51:49.673 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:50 compute-0 nova_compute[189508]: 2025-12-01 22:51:50.300 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:50 compute-0 podman[249475]: 2025-12-01 22:51:50.875488826 +0000 UTC m=+0.153239337 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 22:51:52 compute-0 nova_compute[189508]: 2025-12-01 22:51:52.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:51:53 compute-0 nova_compute[189508]: 2025-12-01 22:51:53.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:51:53 compute-0 nova_compute[189508]: 2025-12-01 22:51:53.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:51:53 compute-0 nova_compute[189508]: 2025-12-01 22:51:53.380 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:51:53 compute-0 nova_compute[189508]: 2025-12-01 22:51:53.381 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:51:53 compute-0 nova_compute[189508]: 2025-12-01 22:51:53.381 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:51:53 compute-0 nova_compute[189508]: 2025-12-01 22:51:53.382 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:51:53 compute-0 podman[249495]: 2025-12-01 22:51:53.559728561 +0000 UTC m=+0.115141789 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm)
Dec  1 22:51:53 compute-0 nova_compute[189508]: 2025-12-01 22:51:53.726 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:51:53 compute-0 nova_compute[189508]: 2025-12-01 22:51:53.727 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5385MB free_disk=72.19720077514648GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:51:53 compute-0 nova_compute[189508]: 2025-12-01 22:51:53.728 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:51:53 compute-0 nova_compute[189508]: 2025-12-01 22:51:53.728 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:51:53 compute-0 nova_compute[189508]: 2025-12-01 22:51:53.799 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:51:53 compute-0 nova_compute[189508]: 2025-12-01 22:51:53.800 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:51:53 compute-0 nova_compute[189508]: 2025-12-01 22:51:53.824 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:51:53 compute-0 nova_compute[189508]: 2025-12-01 22:51:53.843 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:51:53 compute-0 nova_compute[189508]: 2025-12-01 22:51:53.862 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:51:53 compute-0 nova_compute[189508]: 2025-12-01 22:51:53.862 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:51:54 compute-0 nova_compute[189508]: 2025-12-01 22:51:54.676 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:55 compute-0 nova_compute[189508]: 2025-12-01 22:51:55.301 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:55 compute-0 ovn_controller[97770]: 2025-12-01T22:51:55Z|00064|memory_trim|INFO|Detected inactivity (last active 30026 ms ago): trimming memory
Dec  1 22:51:57 compute-0 podman[249516]: 2025-12-01 22:51:57.83771139 +0000 UTC m=+0.104954650 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 22:51:57 compute-0 podman[249515]: 2025-12-01 22:51:57.919784172 +0000 UTC m=+0.186980921 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec  1 22:51:59 compute-0 nova_compute[189508]: 2025-12-01 22:51:59.681 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:51:59 compute-0 podman[203693]: time="2025-12-01T22:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:51:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:51:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4333 "" "Go-http-client/1.1"
Dec  1 22:52:00 compute-0 nova_compute[189508]: 2025-12-01 22:52:00.305 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:01 compute-0 openstack_network_exporter[205887]: ERROR   22:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:52:01 compute-0 openstack_network_exporter[205887]: ERROR   22:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:52:01 compute-0 openstack_network_exporter[205887]: ERROR   22:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:52:01 compute-0 openstack_network_exporter[205887]: ERROR   22:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:52:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:52:01 compute-0 openstack_network_exporter[205887]: ERROR   22:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:52:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:52:02 compute-0 podman[249559]: 2025-12-01 22:52:02.825205353 +0000 UTC m=+0.098657703 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 22:52:02 compute-0 podman[249562]: 2025-12-01 22:52:02.869724302 +0000 UTC m=+0.116709013 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, name=ubi9, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, version=9.4, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, com.redhat.component=ubi9-container, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Dec  1 22:52:02 compute-0 podman[249561]: 2025-12-01 22:52:02.869794134 +0000 UTC m=+0.122147377 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, distribution-scope=public, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc.)
Dec  1 22:52:02 compute-0 podman[249560]: 2025-12-01 22:52:02.881009072 +0000 UTC m=+0.141394062 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec  1 22:52:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:52:04.631 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:52:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:52:04.632 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:52:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:52:04.632 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:52:04 compute-0 nova_compute[189508]: 2025-12-01 22:52:04.684 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:05 compute-0 nova_compute[189508]: 2025-12-01 22:52:05.307 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:09 compute-0 nova_compute[189508]: 2025-12-01 22:52:09.688 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:10 compute-0 nova_compute[189508]: 2025-12-01 22:52:10.311 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:14 compute-0 nova_compute[189508]: 2025-12-01 22:52:14.691 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:15 compute-0 nova_compute[189508]: 2025-12-01 22:52:15.315 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:19 compute-0 nova_compute[189508]: 2025-12-01 22:52:19.694 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:19 compute-0 podman[249639]: 2025-12-01 22:52:19.815711495 +0000 UTC m=+0.093280853 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 22:52:20 compute-0 nova_compute[189508]: 2025-12-01 22:52:20.317 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:21 compute-0 podman[249663]: 2025-12-01 22:52:21.838880595 +0000 UTC m=+0.121008465 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  1 22:52:23 compute-0 podman[249683]: 2025-12-01 22:52:23.839847242 +0000 UTC m=+0.118922927 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 
Base Image, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  1 22:52:24 compute-0 nova_compute[189508]: 2025-12-01 22:52:24.698 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:25 compute-0 nova_compute[189508]: 2025-12-01 22:52:25.321 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:28 compute-0 podman[249703]: 2025-12-01 22:52:28.855250736 +0000 UTC m=+0.121037046 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:52:28 compute-0 podman[249702]: 2025-12-01 22:52:28.903482647 +0000 UTC m=+0.165217833 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  1 22:52:29 compute-0 nova_compute[189508]: 2025-12-01 22:52:29.702 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:29 compute-0 podman[203693]: time="2025-12-01T22:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:52:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:52:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4330 "" "Go-http-client/1.1"
Dec  1 22:52:30 compute-0 nova_compute[189508]: 2025-12-01 22:52:30.324 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:31 compute-0 openstack_network_exporter[205887]: ERROR   22:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:52:31 compute-0 openstack_network_exporter[205887]: ERROR   22:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:52:31 compute-0 openstack_network_exporter[205887]: ERROR   22:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:52:31 compute-0 openstack_network_exporter[205887]: ERROR   22:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:52:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:52:31 compute-0 openstack_network_exporter[205887]: ERROR   22:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:52:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:52:33 compute-0 podman[249746]: 2025-12-01 22:52:33.863556851 +0000 UTC m=+0.108756919 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, release=1755695350, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Dec  1 22:52:33 compute-0 podman[249745]: 2025-12-01 22:52:33.869825298 +0000 UTC m=+0.130737289 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 22:52:33 compute-0 podman[249744]: 2025-12-01 22:52:33.880073568 +0000 UTC m=+0.146726471 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 22:52:33 compute-0 podman[249750]: 2025-12-01 22:52:33.882132336 +0000 UTC m=+0.121464739 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, release-0.7.12=, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=kepler, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  1 22:52:34 compute-0 nova_compute[189508]: 2025-12-01 22:52:34.705 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:35 compute-0 nova_compute[189508]: 2025-12-01 22:52:35.327 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:39 compute-0 nova_compute[189508]: 2025-12-01 22:52:39.707 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:40 compute-0 nova_compute[189508]: 2025-12-01 22:52:40.331 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:42 compute-0 nova_compute[189508]: 2025-12-01 22:52:42.858 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:52:42 compute-0 nova_compute[189508]: 2025-12-01 22:52:42.858 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:52:44 compute-0 nova_compute[189508]: 2025-12-01 22:52:44.711 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:45 compute-0 nova_compute[189508]: 2025-12-01 22:52:45.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:52:45 compute-0 nova_compute[189508]: 2025-12-01 22:52:45.334 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:48 compute-0 nova_compute[189508]: 2025-12-01 22:52:48.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:52:49 compute-0 nova_compute[189508]: 2025-12-01 22:52:49.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:52:49 compute-0 nova_compute[189508]: 2025-12-01 22:52:49.713 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:50 compute-0 nova_compute[189508]: 2025-12-01 22:52:50.237 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:52:50 compute-0 nova_compute[189508]: 2025-12-01 22:52:50.237 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:52:50 compute-0 nova_compute[189508]: 2025-12-01 22:52:50.238 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:52:50 compute-0 nova_compute[189508]: 2025-12-01 22:52:50.268 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 22:52:50 compute-0 nova_compute[189508]: 2025-12-01 22:52:50.337 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:50 compute-0 podman[249826]: 2025-12-01 22:52:50.841079966 +0000 UTC m=+0.112373001 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 22:52:51 compute-0 nova_compute[189508]: 2025-12-01 22:52:51.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:52:51 compute-0 nova_compute[189508]: 2025-12-01 22:52:51.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:52:51 compute-0 nova_compute[189508]: 2025-12-01 22:52:51.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:52:52 compute-0 podman[249850]: 2025-12-01 22:52:52.854195255 +0000 UTC m=+0.121708785 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  1 22:52:53 compute-0 nova_compute[189508]: 2025-12-01 22:52:53.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:52:54 compute-0 nova_compute[189508]: 2025-12-01 22:52:54.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:52:54 compute-0 nova_compute[189508]: 2025-12-01 22:52:54.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:52:54 compute-0 nova_compute[189508]: 2025-12-01 22:52:54.247 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:52:54 compute-0 nova_compute[189508]: 2025-12-01 22:52:54.248 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:52:54 compute-0 nova_compute[189508]: 2025-12-01 22:52:54.248 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:52:54 compute-0 nova_compute[189508]: 2025-12-01 22:52:54.248 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:52:54 compute-0 nova_compute[189508]: 2025-12-01 22:52:54.716 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:54 compute-0 nova_compute[189508]: 2025-12-01 22:52:54.853 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:52:54 compute-0 nova_compute[189508]: 2025-12-01 22:52:54.854 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5389MB free_disk=72.19720077514648GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:52:54 compute-0 nova_compute[189508]: 2025-12-01 22:52:54.854 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:52:54 compute-0 nova_compute[189508]: 2025-12-01 22:52:54.855 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:52:54 compute-0 podman[249869]: 2025-12-01 22:52:54.892833653 +0000 UTC m=+0.154767288 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 22:52:55 compute-0 nova_compute[189508]: 2025-12-01 22:52:55.228 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:52:55 compute-0 nova_compute[189508]: 2025-12-01 22:52:55.229 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:52:55 compute-0 nova_compute[189508]: 2025-12-01 22:52:55.342 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:55 compute-0 nova_compute[189508]: 2025-12-01 22:52:55.374 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing inventories for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 22:52:55 compute-0 nova_compute[189508]: 2025-12-01 22:52:55.812 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating ProviderTree inventory for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 22:52:55 compute-0 nova_compute[189508]: 2025-12-01 22:52:55.813 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating inventory in ProviderTree for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 22:52:55 compute-0 nova_compute[189508]: 2025-12-01 22:52:55.833 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing aggregate associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 22:52:55 compute-0 nova_compute[189508]: 2025-12-01 22:52:55.856 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing trait associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 22:52:55 compute-0 nova_compute[189508]: 2025-12-01 22:52:55.883 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:52:55 compute-0 nova_compute[189508]: 2025-12-01 22:52:55.900 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:52:55 compute-0 nova_compute[189508]: 2025-12-01 22:52:55.901 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:52:55 compute-0 nova_compute[189508]: 2025-12-01 22:52:55.901 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:52:56 compute-0 nova_compute[189508]: 2025-12-01 22:52:56.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:52:56 compute-0 nova_compute[189508]: 2025-12-01 22:52:56.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 22:52:56 compute-0 nova_compute[189508]: 2025-12-01 22:52:56.220 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 22:52:58 compute-0 nova_compute[189508]: 2025-12-01 22:52:58.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:52:58 compute-0 nova_compute[189508]: 2025-12-01 22:52:58.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 22:52:59 compute-0 nova_compute[189508]: 2025-12-01 22:52:59.719 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:52:59 compute-0 podman[203693]: time="2025-12-01T22:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:52:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:52:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Dec  1 22:52:59 compute-0 podman[249890]: 2025-12-01 22:52:59.839710795 +0000 UTC m=+0.101637109 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:52:59 compute-0 podman[249889]: 2025-12-01 22:52:59.892839834 +0000 UTC m=+0.158784721 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:53:00 compute-0 nova_compute[189508]: 2025-12-01 22:53:00.346 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:01 compute-0 openstack_network_exporter[205887]: ERROR   22:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:53:01 compute-0 openstack_network_exporter[205887]: ERROR   22:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:53:01 compute-0 openstack_network_exporter[205887]: ERROR   22:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:53:01 compute-0 openstack_network_exporter[205887]: ERROR   22:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:53:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:53:01 compute-0 openstack_network_exporter[205887]: ERROR   22:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:53:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:53:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:53:04.634 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:53:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:53:04.634 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:53:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:53:04.635 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:53:04 compute-0 nova_compute[189508]: 2025-12-01 22:53:04.721 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:04 compute-0 podman[249934]: 2025-12-01 22:53:04.846481465 +0000 UTC m=+0.115560121 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:53:04 compute-0 podman[249933]: 2025-12-01 22:53:04.853761341 +0000 UTC m=+0.129267448 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:53:04 compute-0 podman[249941]: 2025-12-01 22:53:04.860985245 +0000 UTC m=+0.110380576 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., version=9.4, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public)
Dec  1 22:53:04 compute-0 podman[249935]: 2025-12-01 22:53:04.868215449 +0000 UTC m=+0.121306314 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=)
Dec  1 22:53:05 compute-0 nova_compute[189508]: 2025-12-01 22:53:05.348 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:09 compute-0 nova_compute[189508]: 2025-12-01 22:53:09.074 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:53:09 compute-0 nova_compute[189508]: 2025-12-01 22:53:09.723 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:10 compute-0 nova_compute[189508]: 2025-12-01 22:53:10.350 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:14 compute-0 nova_compute[189508]: 2025-12-01 22:53:14.726 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:15 compute-0 nova_compute[189508]: 2025-12-01 22:53:15.353 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:18 compute-0 nova_compute[189508]: 2025-12-01 22:53:18.023 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:53:19 compute-0 nova_compute[189508]: 2025-12-01 22:53:19.731 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:20 compute-0 nova_compute[189508]: 2025-12-01 22:53:20.356 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:21 compute-0 podman[250014]: 2025-12-01 22:53:21.809961746 +0000 UTC m=+0.086326486 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:53:23 compute-0 podman[250037]: 2025-12-01 22:53:23.849548072 +0000 UTC m=+0.121305534 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  1 22:53:24 compute-0 nova_compute[189508]: 2025-12-01 22:53:24.735 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:25 compute-0 nova_compute[189508]: 2025-12-01 22:53:25.359 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:25 compute-0 podman[250058]: 2025-12-01 22:53:25.857134734 +0000 UTC m=+0.121736424 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, 
io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 22:53:29 compute-0 nova_compute[189508]: 2025-12-01 22:53:29.739 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:29 compute-0 podman[203693]: time="2025-12-01T22:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:53:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:53:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Dec  1 22:53:30 compute-0 nova_compute[189508]: 2025-12-01 22:53:30.362 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:30 compute-0 podman[250079]: 2025-12-01 22:53:30.824888515 +0000 UTC m=+0.100676771 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, 
org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Dec  1 22:53:30 compute-0 podman[250078]: 2025-12-01 22:53:30.901988941 +0000 UTC m=+0.180745081 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 22:53:31 compute-0 openstack_network_exporter[205887]: ERROR   22:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:53:31 compute-0 openstack_network_exporter[205887]: ERROR   22:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:53:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:53:31 compute-0 openstack_network_exporter[205887]: ERROR   22:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:53:31 compute-0 openstack_network_exporter[205887]: ERROR   22:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:53:31 compute-0 openstack_network_exporter[205887]: ERROR   22:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:53:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:53:34 compute-0 nova_compute[189508]: 2025-12-01 22:53:34.742 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.272 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.274 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.279 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.280 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.281 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.282 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.284 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.284 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.285 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.290 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.290 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:53:35.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:53:35 compute-0 nova_compute[189508]: 2025-12-01 22:53:35.367 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:35 compute-0 podman[250121]: 2025-12-01 22:53:35.831797251 +0000 UTC m=+0.090853814 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1755695350)
Dec  1 22:53:35 compute-0 podman[250119]: 2025-12-01 22:53:35.847930937 +0000 UTC m=+0.123044683 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 22:53:35 compute-0 podman[250120]: 2025-12-01 22:53:35.852925807 +0000 UTC m=+0.115091978 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 22:53:35 compute-0 podman[250122]: 2025-12-01 22:53:35.878694805 +0000 UTC m=+0.131498412 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.29.0, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, container_name=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, build-date=2024-09-18T21:23:30)
Dec  1 22:53:39 compute-0 nova_compute[189508]: 2025-12-01 22:53:39.745 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:40 compute-0 nova_compute[189508]: 2025-12-01 22:53:40.368 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:42 compute-0 nova_compute[189508]: 2025-12-01 22:53:42.243 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:53:44 compute-0 nova_compute[189508]: 2025-12-01 22:53:44.752 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:45 compute-0 nova_compute[189508]: 2025-12-01 22:53:45.373 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:47 compute-0 nova_compute[189508]: 2025-12-01 22:53:47.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:53:48 compute-0 nova_compute[189508]: 2025-12-01 22:53:48.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:53:49 compute-0 nova_compute[189508]: 2025-12-01 22:53:49.760 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:50 compute-0 nova_compute[189508]: 2025-12-01 22:53:50.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:53:50 compute-0 nova_compute[189508]: 2025-12-01 22:53:50.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:53:50 compute-0 nova_compute[189508]: 2025-12-01 22:53:50.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:53:50 compute-0 nova_compute[189508]: 2025-12-01 22:53:50.224 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 22:53:50 compute-0 nova_compute[189508]: 2025-12-01 22:53:50.376 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:51 compute-0 nova_compute[189508]: 2025-12-01 22:53:51.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:53:51 compute-0 nova_compute[189508]: 2025-12-01 22:53:51.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:53:51 compute-0 nova_compute[189508]: 2025-12-01 22:53:51.202 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:53:52 compute-0 podman[250198]: 2025-12-01 22:53:52.840919261 +0000 UTC m=+0.116573151 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:53:54 compute-0 nova_compute[189508]: 2025-12-01 22:53:54.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:53:54 compute-0 nova_compute[189508]: 2025-12-01 22:53:54.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:53:54 compute-0 nova_compute[189508]: 2025-12-01 22:53:54.763 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:54 compute-0 podman[250222]: 2025-12-01 22:53:54.805152839 +0000 UTC m=+0.084746052 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 22:53:55 compute-0 nova_compute[189508]: 2025-12-01 22:53:55.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:53:55 compute-0 nova_compute[189508]: 2025-12-01 22:53:55.232 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:53:55 compute-0 nova_compute[189508]: 2025-12-01 22:53:55.232 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:53:55 compute-0 nova_compute[189508]: 2025-12-01 22:53:55.232 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:53:55 compute-0 nova_compute[189508]: 2025-12-01 22:53:55.232 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:53:55 compute-0 nova_compute[189508]: 2025-12-01 22:53:55.378 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:55 compute-0 nova_compute[189508]: 2025-12-01 22:53:55.646 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:53:55 compute-0 nova_compute[189508]: 2025-12-01 22:53:55.647 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5384MB free_disk=72.19720077514648GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:53:55 compute-0 nova_compute[189508]: 2025-12-01 22:53:55.647 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:53:55 compute-0 nova_compute[189508]: 2025-12-01 22:53:55.649 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:53:55 compute-0 nova_compute[189508]: 2025-12-01 22:53:55.719 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:53:55 compute-0 nova_compute[189508]: 2025-12-01 22:53:55.720 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:53:55 compute-0 nova_compute[189508]: 2025-12-01 22:53:55.744 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:53:55 compute-0 nova_compute[189508]: 2025-12-01 22:53:55.760 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:53:55 compute-0 nova_compute[189508]: 2025-12-01 22:53:55.761 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:53:55 compute-0 nova_compute[189508]: 2025-12-01 22:53:55.761 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:53:56 compute-0 podman[250240]: 2025-12-01 22:53:56.867141508 +0000 UTC m=+0.136754551 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, 
config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 22:53:59 compute-0 podman[203693]: time="2025-12-01T22:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:53:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:53:59 compute-0 nova_compute[189508]: 2025-12-01 22:53:59.766 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:53:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
Dec  1 22:54:00 compute-0 nova_compute[189508]: 2025-12-01 22:54:00.382 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:01 compute-0 openstack_network_exporter[205887]: ERROR   22:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:54:01 compute-0 openstack_network_exporter[205887]: ERROR   22:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:54:01 compute-0 openstack_network_exporter[205887]: ERROR   22:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:54:01 compute-0 openstack_network_exporter[205887]: ERROR   22:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:54:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:54:01 compute-0 openstack_network_exporter[205887]: ERROR   22:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:54:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:54:01 compute-0 podman[250262]: 2025-12-01 22:54:01.884489757 +0000 UTC m=+0.147015549 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  1 22:54:01 compute-0 podman[250261]: 2025-12-01 22:54:01.922056427 +0000 UTC m=+0.191973948 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec  1 22:54:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:54:04.635 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:54:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:54:04.636 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:54:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:54:04.636 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:54:04 compute-0 nova_compute[189508]: 2025-12-01 22:54:04.770 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:05 compute-0 nova_compute[189508]: 2025-12-01 22:54:05.385 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:06 compute-0 podman[250303]: 2025-12-01 22:54:06.838517851 +0000 UTC m=+0.111877448 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 22:54:06 compute-0 podman[250304]: 2025-12-01 22:54:06.850833408 +0000 UTC m=+0.108366578 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Dec  1 22:54:06 compute-0 podman[250310]: 2025-12-01 22:54:06.859652127 +0000 UTC m=+0.117216208 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vcs-type=git, io.openshift.tags=base rhel9, release=1214.1726694543, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9)
Dec  1 22:54:06 compute-0 podman[250305]: 2025-12-01 22:54:06.859829252 +0000 UTC m=+0.114194423 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses 
microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Dec  1 22:54:09 compute-0 nova_compute[189508]: 2025-12-01 22:54:09.773 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:10 compute-0 nova_compute[189508]: 2025-12-01 22:54:10.388 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:14 compute-0 nova_compute[189508]: 2025-12-01 22:54:14.775 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:15 compute-0 nova_compute[189508]: 2025-12-01 22:54:15.390 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:19 compute-0 nova_compute[189508]: 2025-12-01 22:54:19.778 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:20 compute-0 nova_compute[189508]: 2025-12-01 22:54:20.392 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:23 compute-0 podman[250385]: 2025-12-01 22:54:23.819175584 +0000 UTC m=+0.100270491 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:54:24 compute-0 nova_compute[189508]: 2025-12-01 22:54:24.781 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:25 compute-0 nova_compute[189508]: 2025-12-01 22:54:25.397 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:25 compute-0 podman[250408]: 2025-12-01 22:54:25.843442488 +0000 UTC m=+0.109998865 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:54:27 compute-0 podman[250427]: 2025-12-01 22:54:27.848081867 +0000 UTC m=+0.124179625 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 22:54:29 compute-0 podman[203693]: time="2025-12-01T22:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:54:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:54:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4329 "" "Go-http-client/1.1"
Dec  1 22:54:29 compute-0 nova_compute[189508]: 2025-12-01 22:54:29.784 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:30 compute-0 nova_compute[189508]: 2025-12-01 22:54:30.399 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:31 compute-0 openstack_network_exporter[205887]: ERROR   22:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:54:31 compute-0 openstack_network_exporter[205887]: ERROR   22:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:54:31 compute-0 openstack_network_exporter[205887]: ERROR   22:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:54:31 compute-0 openstack_network_exporter[205887]: ERROR   22:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:54:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:54:31 compute-0 openstack_network_exporter[205887]: ERROR   22:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:54:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:54:32 compute-0 podman[250448]: 2025-12-01 22:54:32.835268007 +0000 UTC m=+0.105677133 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 22:54:32 compute-0 podman[250447]: 2025-12-01 22:54:32.887654655 +0000 UTC m=+0.161614341 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  1 22:54:34 compute-0 nova_compute[189508]: 2025-12-01 22:54:34.787 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:35 compute-0 nova_compute[189508]: 2025-12-01 22:54:35.400 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:37 compute-0 podman[250491]: 2025-12-01 22:54:37.804630341 +0000 UTC m=+0.073896806 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 22:54:37 compute-0 podman[250490]: 2025-12-01 22:54:37.817473883 +0000 UTC m=+0.093283163 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 22:54:37 compute-0 podman[250498]: 2025-12-01 22:54:37.825215741 +0000 UTC m=+0.080523342 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, name=ubi9, release-0.7.12=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.component=ubi9-container)
Dec  1 22:54:37 compute-0 podman[250492]: 2025-12-01 22:54:37.848837288 +0000 UTC m=+0.106799484 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, name=ubi9-minimal, distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 22:54:39 compute-0 nova_compute[189508]: 2025-12-01 22:54:39.789 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:40 compute-0 nova_compute[189508]: 2025-12-01 22:54:40.404 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:43 compute-0 nova_compute[189508]: 2025-12-01 22:54:43.758 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:54:43 compute-0 nova_compute[189508]: 2025-12-01 22:54:43.758 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:54:44 compute-0 nova_compute[189508]: 2025-12-01 22:54:44.791 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:45 compute-0 nova_compute[189508]: 2025-12-01 22:54:45.406 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:48 compute-0 nova_compute[189508]: 2025-12-01 22:54:48.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:54:49 compute-0 nova_compute[189508]: 2025-12-01 22:54:49.794 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:50 compute-0 nova_compute[189508]: 2025-12-01 22:54:50.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:54:50 compute-0 nova_compute[189508]: 2025-12-01 22:54:50.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:54:50 compute-0 nova_compute[189508]: 2025-12-01 22:54:50.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:54:50 compute-0 nova_compute[189508]: 2025-12-01 22:54:50.227 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 22:54:50 compute-0 nova_compute[189508]: 2025-12-01 22:54:50.228 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:54:50 compute-0 nova_compute[189508]: 2025-12-01 22:54:50.409 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:52 compute-0 nova_compute[189508]: 2025-12-01 22:54:52.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:54:52 compute-0 nova_compute[189508]: 2025-12-01 22:54:52.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:54:52 compute-0 nova_compute[189508]: 2025-12-01 22:54:52.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:54:54 compute-0 nova_compute[189508]: 2025-12-01 22:54:54.797 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:54 compute-0 podman[250567]: 2025-12-01 22:54:54.840210954 +0000 UTC m=+0.104215992 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 22:54:55 compute-0 nova_compute[189508]: 2025-12-01 22:54:55.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:54:55 compute-0 nova_compute[189508]: 2025-12-01 22:54:55.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:54:55 compute-0 nova_compute[189508]: 2025-12-01 22:54:55.412 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:54:56 compute-0 podman[250591]: 2025-12-01 22:54:56.869067155 +0000 UTC m=+0.141987297 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec  1 22:54:57 compute-0 nova_compute[189508]: 2025-12-01 22:54:57.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:54:57 compute-0 nova_compute[189508]: 2025-12-01 22:54:57.243 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:54:57 compute-0 nova_compute[189508]: 2025-12-01 22:54:57.244 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:54:57 compute-0 nova_compute[189508]: 2025-12-01 22:54:57.244 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:54:57 compute-0 nova_compute[189508]: 2025-12-01 22:54:57.245 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:54:57 compute-0 nova_compute[189508]: 2025-12-01 22:54:57.780 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:54:57 compute-0 nova_compute[189508]: 2025-12-01 22:54:57.783 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5397MB free_disk=72.19720077514648GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:54:57 compute-0 nova_compute[189508]: 2025-12-01 22:54:57.784 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:54:57 compute-0 nova_compute[189508]: 2025-12-01 22:54:57.785 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:54:57 compute-0 nova_compute[189508]: 2025-12-01 22:54:57.885 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:54:57 compute-0 nova_compute[189508]: 2025-12-01 22:54:57.887 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:54:57 compute-0 nova_compute[189508]: 2025-12-01 22:54:57.928 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:54:57 compute-0 nova_compute[189508]: 2025-12-01 22:54:57.944 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:54:57 compute-0 nova_compute[189508]: 2025-12-01 22:54:57.947 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:54:57 compute-0 nova_compute[189508]: 2025-12-01 22:54:57.948 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:54:58 compute-0 podman[250608]: 2025-12-01 22:54:58.809427991 +0000 UTC m=+0.091652037 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  1 22:54:59 compute-0 podman[203693]: time="2025-12-01T22:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:54:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:54:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4329 "" "Go-http-client/1.1"
Dec  1 22:54:59 compute-0 nova_compute[189508]: 2025-12-01 22:54:59.799 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:00 compute-0 nova_compute[189508]: 2025-12-01 22:55:00.415 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:01 compute-0 openstack_network_exporter[205887]: ERROR   22:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:55:01 compute-0 openstack_network_exporter[205887]: ERROR   22:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:55:01 compute-0 openstack_network_exporter[205887]: ERROR   22:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:55:01 compute-0 openstack_network_exporter[205887]: ERROR   22:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:55:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:55:01 compute-0 openstack_network_exporter[205887]: ERROR   22:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:55:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:55:03 compute-0 podman[250628]: 2025-12-01 22:55:03.785240129 +0000 UTC m=+0.059269933 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:55:03 compute-0 podman[250627]: 2025-12-01 22:55:03.824938349 +0000 UTC m=+0.106494466 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:55:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:55:04.637 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:55:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:55:04.637 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:55:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:55:04.637 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:55:04 compute-0 nova_compute[189508]: 2025-12-01 22:55:04.802 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:05 compute-0 nova_compute[189508]: 2025-12-01 22:55:05.417 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:08 compute-0 podman[250670]: 2025-12-01 22:55:08.780043113 +0000 UTC m=+0.063411001 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:55:08 compute-0 podman[250671]: 2025-12-01 22:55:08.820405022 +0000 UTC m=+0.095254780 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 22:55:08 compute-0 podman[250672]: 2025-12-01 22:55:08.825922437 +0000 UTC m=+0.097280976 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm)
Dec  1 22:55:08 compute-0 podman[250673]: 2025-12-01 22:55:08.833502371 +0000 UTC m=+0.091639466 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, distribution-scope=public, release=1214.1726694543, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible)
Dec  1 22:55:09 compute-0 nova_compute[189508]: 2025-12-01 22:55:09.804 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:10 compute-0 nova_compute[189508]: 2025-12-01 22:55:10.420 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:14 compute-0 nova_compute[189508]: 2025-12-01 22:55:14.807 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:15 compute-0 nova_compute[189508]: 2025-12-01 22:55:15.432 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:15 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:55:15.883 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:55:15 compute-0 nova_compute[189508]: 2025-12-01 22:55:15.884 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:15 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:55:15.884 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 22:55:19 compute-0 nova_compute[189508]: 2025-12-01 22:55:19.811 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:55:19.889 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:55:20 compute-0 nova_compute[189508]: 2025-12-01 22:55:20.434 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:24 compute-0 nova_compute[189508]: 2025-12-01 22:55:24.815 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:25 compute-0 nova_compute[189508]: 2025-12-01 22:55:25.438 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:25 compute-0 podman[250750]: 2025-12-01 22:55:25.827070188 +0000 UTC m=+0.104959392 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:55:27 compute-0 podman[250773]: 2025-12-01 22:55:27.817895087 +0000 UTC m=+0.092323696 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 22:55:29 compute-0 podman[203693]: time="2025-12-01T22:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:55:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:55:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4332 "" "Go-http-client/1.1"
Dec  1 22:55:29 compute-0 nova_compute[189508]: 2025-12-01 22:55:29.817 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:29 compute-0 podman[250792]: 2025-12-01 22:55:29.823750531 +0000 UTC m=+0.106085484 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  1 22:55:30 compute-0 nova_compute[189508]: 2025-12-01 22:55:30.441 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:31 compute-0 openstack_network_exporter[205887]: ERROR   22:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:55:31 compute-0 openstack_network_exporter[205887]: ERROR   22:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:55:31 compute-0 openstack_network_exporter[205887]: ERROR   22:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:55:31 compute-0 openstack_network_exporter[205887]: ERROR   22:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:55:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:55:31 compute-0 openstack_network_exporter[205887]: ERROR   22:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:55:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:55:34 compute-0 nova_compute[189508]: 2025-12-01 22:55:34.820 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:34 compute-0 podman[250812]: 2025-12-01 22:55:34.830978556 +0000 UTC m=+0.092751298 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 22:55:34 compute-0 podman[250811]: 2025-12-01 22:55:34.856079494 +0000 UTC m=+0.131978495 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.273 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.274 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.274 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.283 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.284 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.284 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.285 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.285 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.286 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.286 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.290 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.290 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.292 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.292 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.292 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.294 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:55:35.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:55:35 compute-0 nova_compute[189508]: 2025-12-01 22:55:35.446 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:39 compute-0 nova_compute[189508]: 2025-12-01 22:55:39.823 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:39 compute-0 podman[250855]: 2025-12-01 22:55:39.829729751 +0000 UTC m=+0.096684259 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 22:55:39 compute-0 podman[250856]: 2025-12-01 22:55:39.844711924 +0000 UTC m=+0.120789499 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  1 22:55:39 compute-0 podman[250857]: 2025-12-01 22:55:39.857448323 +0000 UTC m=+0.110914330 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=openstack_network_exporter, release=1755695350, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.buildah.version=1.33.7, version=9.6, config_id=edpm, maintainer=Red Hat, Inc.)
Dec  1 22:55:39 compute-0 podman[250863]: 2025-12-01 22:55:39.870035028 +0000 UTC m=+0.120403928 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_id=edpm, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.component=ubi9-container, release-0.7.12=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public)
Dec  1 22:55:40 compute-0 nova_compute[189508]: 2025-12-01 22:55:40.449 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:43 compute-0 nova_compute[189508]: 2025-12-01 22:55:43.944 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:55:44 compute-0 nova_compute[189508]: 2025-12-01 22:55:44.826 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:45 compute-0 nova_compute[189508]: 2025-12-01 22:55:45.454 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:46 compute-0 ovn_controller[97770]: 2025-12-01T22:55:46Z|00065|memory_trim|INFO|Detected inactivity (last active 30020 ms ago): trimming memory
Dec  1 22:55:49 compute-0 nova_compute[189508]: 2025-12-01 22:55:49.829 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:50 compute-0 nova_compute[189508]: 2025-12-01 22:55:50.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:55:50 compute-0 nova_compute[189508]: 2025-12-01 22:55:50.459 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:52 compute-0 nova_compute[189508]: 2025-12-01 22:55:52.085 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:52 compute-0 nova_compute[189508]: 2025-12-01 22:55:52.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:55:52 compute-0 nova_compute[189508]: 2025-12-01 22:55:52.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:55:52 compute-0 nova_compute[189508]: 2025-12-01 22:55:52.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:55:52 compute-0 nova_compute[189508]: 2025-12-01 22:55:52.216 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 22:55:52 compute-0 nova_compute[189508]: 2025-12-01 22:55:52.216 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:55:52 compute-0 nova_compute[189508]: 2025-12-01 22:55:52.216 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:55:54 compute-0 nova_compute[189508]: 2025-12-01 22:55:54.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:55:54 compute-0 nova_compute[189508]: 2025-12-01 22:55:54.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:55:54 compute-0 nova_compute[189508]: 2025-12-01 22:55:54.452 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:54 compute-0 nova_compute[189508]: 2025-12-01 22:55:54.831 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:55 compute-0 nova_compute[189508]: 2025-12-01 22:55:55.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:55:55 compute-0 nova_compute[189508]: 2025-12-01 22:55:55.224 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:55 compute-0 nova_compute[189508]: 2025-12-01 22:55:55.345 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:55 compute-0 nova_compute[189508]: 2025-12-01 22:55:55.461 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:56 compute-0 podman[250933]: 2025-12-01 22:55:56.838708754 +0000 UTC m=+0.105485127 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:55:57 compute-0 nova_compute[189508]: 2025-12-01 22:55:57.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:55:57 compute-0 nova_compute[189508]: 2025-12-01 22:55:57.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:55:57 compute-0 nova_compute[189508]: 2025-12-01 22:55:57.239 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:55:57 compute-0 nova_compute[189508]: 2025-12-01 22:55:57.239 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:55:57 compute-0 nova_compute[189508]: 2025-12-01 22:55:57.239 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:55:57 compute-0 nova_compute[189508]: 2025-12-01 22:55:57.239 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:55:57 compute-0 nova_compute[189508]: 2025-12-01 22:55:57.572 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:55:57 compute-0 nova_compute[189508]: 2025-12-01 22:55:57.573 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5396MB free_disk=72.1971206665039GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:55:57 compute-0 nova_compute[189508]: 2025-12-01 22:55:57.574 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:55:57 compute-0 nova_compute[189508]: 2025-12-01 22:55:57.574 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:55:57 compute-0 nova_compute[189508]: 2025-12-01 22:55:57.720 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:55:57 compute-0 nova_compute[189508]: 2025-12-01 22:55:57.721 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:55:57 compute-0 nova_compute[189508]: 2025-12-01 22:55:57.800 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:55:57 compute-0 nova_compute[189508]: 2025-12-01 22:55:57.821 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:55:57 compute-0 nova_compute[189508]: 2025-12-01 22:55:57.823 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:55:57 compute-0 nova_compute[189508]: 2025-12-01 22:55:57.823 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:55:58 compute-0 nova_compute[189508]: 2025-12-01 22:55:58.735 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:55:58 compute-0 podman[250957]: 2025-12-01 22:55:58.81806734 +0000 UTC m=+0.079399841 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:55:59 compute-0 podman[203693]: time="2025-12-01T22:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:55:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 22:55:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4341 "" "Go-http-client/1.1"
Dec  1 22:55:59 compute-0 nova_compute[189508]: 2025-12-01 22:55:59.834 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:00 compute-0 nova_compute[189508]: 2025-12-01 22:56:00.174 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:00 compute-0 nova_compute[189508]: 2025-12-01 22:56:00.464 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:00 compute-0 podman[250975]: 2025-12-01 22:56:00.840892801 +0000 UTC m=+0.104453588 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec  1 22:56:01 compute-0 openstack_network_exporter[205887]: ERROR   22:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:56:01 compute-0 openstack_network_exporter[205887]: ERROR   22:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:56:01 compute-0 openstack_network_exporter[205887]: ERROR   22:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:56:01 compute-0 openstack_network_exporter[205887]: ERROR   22:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:56:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:56:01 compute-0 openstack_network_exporter[205887]: ERROR   22:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:56:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:56:03 compute-0 nova_compute[189508]: 2025-12-01 22:56:03.399 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:03 compute-0 nova_compute[189508]: 2025-12-01 22:56:03.504 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:03 compute-0 nova_compute[189508]: 2025-12-01 22:56:03.585 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:04 compute-0 nova_compute[189508]: 2025-12-01 22:56:04.557 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Acquiring lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:04 compute-0 nova_compute[189508]: 2025-12-01 22:56:04.558 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:04 compute-0 nova_compute[189508]: 2025-12-01 22:56:04.578 189512 DEBUG nova.compute.manager [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 22:56:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:04.638 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:04.639 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:04.640 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:04 compute-0 nova_compute[189508]: 2025-12-01 22:56:04.677 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:04 compute-0 nova_compute[189508]: 2025-12-01 22:56:04.678 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:04 compute-0 nova_compute[189508]: 2025-12-01 22:56:04.688 189512 DEBUG nova.virt.hardware [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 22:56:04 compute-0 nova_compute[189508]: 2025-12-01 22:56:04.689 189512 INFO nova.compute.claims [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 22:56:04 compute-0 nova_compute[189508]: 2025-12-01 22:56:04.819 189512 DEBUG nova.compute.provider_tree [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:56:04 compute-0 nova_compute[189508]: 2025-12-01 22:56:04.837 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:04 compute-0 nova_compute[189508]: 2025-12-01 22:56:04.839 189512 DEBUG nova.scheduler.client.report [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:56:04 compute-0 nova_compute[189508]: 2025-12-01 22:56:04.865 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:04 compute-0 nova_compute[189508]: 2025-12-01 22:56:04.866 189512 DEBUG nova.compute.manager [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 22:56:04 compute-0 nova_compute[189508]: 2025-12-01 22:56:04.919 189512 DEBUG nova.compute.manager [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 22:56:04 compute-0 nova_compute[189508]: 2025-12-01 22:56:04.919 189512 DEBUG nova.network.neutron [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 22:56:04 compute-0 nova_compute[189508]: 2025-12-01 22:56:04.951 189512 INFO nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 22:56:04 compute-0 nova_compute[189508]: 2025-12-01 22:56:04.968 189512 DEBUG nova.compute.manager [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 22:56:05 compute-0 nova_compute[189508]: 2025-12-01 22:56:05.073 189512 DEBUG nova.compute.manager [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 22:56:05 compute-0 nova_compute[189508]: 2025-12-01 22:56:05.074 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 22:56:05 compute-0 nova_compute[189508]: 2025-12-01 22:56:05.075 189512 INFO nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Creating image(s)#033[00m
Dec  1 22:56:05 compute-0 nova_compute[189508]: 2025-12-01 22:56:05.075 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Acquiring lock "/var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:05 compute-0 nova_compute[189508]: 2025-12-01 22:56:05.076 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "/var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:05 compute-0 nova_compute[189508]: 2025-12-01 22:56:05.076 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "/var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:05 compute-0 nova_compute[189508]: 2025-12-01 22:56:05.077 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Acquiring lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:05 compute-0 nova_compute[189508]: 2025-12-01 22:56:05.078 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:05 compute-0 nova_compute[189508]: 2025-12-01 22:56:05.389 189512 DEBUG nova.policy [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2d96ce1170a34f538a6b777063374e7d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5188137218bd444b9e92a1299207f297', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 22:56:05 compute-0 nova_compute[189508]: 2025-12-01 22:56:05.467 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:05 compute-0 podman[250995]: 2025-12-01 22:56:05.836995442 +0000 UTC m=+0.114486991 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  1 22:56:05 compute-0 podman[250994]: 2025-12-01 22:56:05.855072032 +0000 UTC m=+0.137796979 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:56:07 compute-0 nova_compute[189508]: 2025-12-01 22:56:07.531 189512 DEBUG nova.network.neutron [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Successfully created port: c3cfec72-c837-4139-9b78-a9e2dea166e8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 22:56:07 compute-0 nova_compute[189508]: 2025-12-01 22:56:07.661 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:07 compute-0 nova_compute[189508]: 2025-12-01 22:56:07.692 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:07 compute-0 nova_compute[189508]: 2025-12-01 22:56:07.792 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270.part --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:07 compute-0 nova_compute[189508]: 2025-12-01 22:56:07.802 189512 DEBUG nova.virt.images [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] 74bb08bf-1799-4930-aad4-d505f26ff5f4 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  1 22:56:07 compute-0 nova_compute[189508]: 2025-12-01 22:56:07.809 189512 DEBUG nova.privsep.utils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  1 22:56:07 compute-0 nova_compute[189508]: 2025-12-01 22:56:07.812 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270.part /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.075 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270.part /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270.converted" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.086 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.154 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270.converted --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.157 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.188 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.276 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.277 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Acquiring lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.278 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.293 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.375 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.376 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270,backing_fmt=raw /var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.443 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270,backing_fmt=raw /var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd/disk 1073741824" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.445 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.446 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.512 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.514 189512 DEBUG nova.virt.disk.api [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Checking if we can resize image /var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.522 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.583 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.585 189512 DEBUG nova.virt.disk.api [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Cannot resize image /var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.586 189512 DEBUG nova.objects.instance [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lazy-loading 'migration_context' on Instance uuid 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.600 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.600 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Ensure instance console log exists: /var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.601 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.601 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:08 compute-0 nova_compute[189508]: 2025-12-01 22:56:08.601 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:09 compute-0 nova_compute[189508]: 2025-12-01 22:56:09.036 189512 DEBUG nova.network.neutron [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Successfully updated port: c3cfec72-c837-4139-9b78-a9e2dea166e8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 22:56:09 compute-0 nova_compute[189508]: 2025-12-01 22:56:09.060 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Acquiring lock "refresh_cache-86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:56:09 compute-0 nova_compute[189508]: 2025-12-01 22:56:09.061 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Acquired lock "refresh_cache-86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:56:09 compute-0 nova_compute[189508]: 2025-12-01 22:56:09.061 189512 DEBUG nova.network.neutron [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 22:56:09 compute-0 nova_compute[189508]: 2025-12-01 22:56:09.281 189512 DEBUG nova.network.neutron [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 22:56:09 compute-0 nova_compute[189508]: 2025-12-01 22:56:09.453 189512 DEBUG nova.compute.manager [req-86ae4791-eba5-4b80-8a38-385d0491dd92 req-c44f48ff-c22a-44d9-8dae-86458bb44522 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Received event network-changed-c3cfec72-c837-4139-9b78-a9e2dea166e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:56:09 compute-0 nova_compute[189508]: 2025-12-01 22:56:09.454 189512 DEBUG nova.compute.manager [req-86ae4791-eba5-4b80-8a38-385d0491dd92 req-c44f48ff-c22a-44d9-8dae-86458bb44522 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Refreshing instance network info cache due to event network-changed-c3cfec72-c837-4139-9b78-a9e2dea166e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:56:09 compute-0 nova_compute[189508]: 2025-12-01 22:56:09.454 189512 DEBUG oslo_concurrency.lockutils [req-86ae4791-eba5-4b80-8a38-385d0491dd92 req-c44f48ff-c22a-44d9-8dae-86458bb44522 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:56:09 compute-0 nova_compute[189508]: 2025-12-01 22:56:09.840 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:10 compute-0 nova_compute[189508]: 2025-12-01 22:56:10.469 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:10 compute-0 podman[251066]: 2025-12-01 22:56:10.836778097 +0000 UTC m=+0.112174936 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 22:56:10 compute-0 podman[251067]: 2025-12-01 22:56:10.853013455 +0000 UTC m=+0.120016898 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  1 22:56:10 compute-0 podman[251065]: 2025-12-01 22:56:10.877789454 +0000 UTC m=+0.146209687 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:56:10 compute-0 podman[251073]: 2025-12-01 22:56:10.879165402 +0000 UTC m=+0.136125611 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2024-09-18T21:23:30, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.114 189512 DEBUG nova.network.neutron [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Updating instance_info_cache with network_info: [{"id": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "address": "fa:16:3e:66:8e:24", "network": {"id": "2573f610-2d06-4add-a22c-f90f61f3a95a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533435019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5188137218bd444b9e92a1299207f297", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3cfec72-c8", "ovs_interfaceid": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.139 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Releasing lock "refresh_cache-86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.139 189512 DEBUG nova.compute.manager [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Instance network_info: |[{"id": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "address": "fa:16:3e:66:8e:24", "network": {"id": "2573f610-2d06-4add-a22c-f90f61f3a95a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533435019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5188137218bd444b9e92a1299207f297", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3cfec72-c8", "ovs_interfaceid": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.141 189512 DEBUG oslo_concurrency.lockutils [req-86ae4791-eba5-4b80-8a38-385d0491dd92 req-c44f48ff-c22a-44d9-8dae-86458bb44522 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.141 189512 DEBUG nova.network.neutron [req-86ae4791-eba5-4b80-8a38-385d0491dd92 req-c44f48ff-c22a-44d9-8dae-86458bb44522 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Refreshing network info cache for port c3cfec72-c837-4139-9b78-a9e2dea166e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.147 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Start _get_guest_xml network_info=[{"id": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "address": "fa:16:3e:66:8e:24", "network": {"id": "2573f610-2d06-4add-a22c-f90f61f3a95a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533435019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5188137218bd444b9e92a1299207f297", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3cfec72-c8", "ovs_interfaceid": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T22:55:21Z,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T22:55:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '74bb08bf-1799-4930-aad4-d505f26ff5f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.171 189512 WARNING nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.181 189512 DEBUG nova.virt.libvirt.host [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.182 189512 DEBUG nova.virt.libvirt.host [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.189 189512 DEBUG nova.virt.libvirt.host [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.191 189512 DEBUG nova.virt.libvirt.host [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.192 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.193 189512 DEBUG nova.virt.hardware [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T22:55:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e42a55e-71e2-4041-8ca2-725d63f058bf',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T22:55:21Z,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T22:55:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.195 189512 DEBUG nova.virt.hardware [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.196 189512 DEBUG nova.virt.hardware [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.197 189512 DEBUG nova.virt.hardware [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.198 189512 DEBUG nova.virt.hardware [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.199 189512 DEBUG nova.virt.hardware [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.200 189512 DEBUG nova.virt.hardware [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.201 189512 DEBUG nova.virt.hardware [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.201 189512 DEBUG nova.virt.hardware [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.202 189512 DEBUG nova.virt.hardware [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.203 189512 DEBUG nova.virt.hardware [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.213 189512 DEBUG nova.virt.libvirt.vif [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:56:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1030745659',display_name='tempest-ServerAddressesTestJSON-server-1030745659',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1030745659',id=6,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5188137218bd444b9e92a1299207f297',ramdisk_id='',reservation_id='r-m2niz0rp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-529319613',owner_user_name='tempest-ServerAddressesT
estJSON-529319613-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:56:05Z,user_data=None,user_id='2d96ce1170a34f538a6b777063374e7d',uuid=86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "address": "fa:16:3e:66:8e:24", "network": {"id": "2573f610-2d06-4add-a22c-f90f61f3a95a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533435019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5188137218bd444b9e92a1299207f297", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3cfec72-c8", "ovs_interfaceid": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.214 189512 DEBUG nova.network.os_vif_util [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Converting VIF {"id": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "address": "fa:16:3e:66:8e:24", "network": {"id": "2573f610-2d06-4add-a22c-f90f61f3a95a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533435019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5188137218bd444b9e92a1299207f297", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3cfec72-c8", "ovs_interfaceid": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.217 189512 DEBUG nova.network.os_vif_util [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:8e:24,bridge_name='br-int',has_traffic_filtering=True,id=c3cfec72-c837-4139-9b78-a9e2dea166e8,network=Network(2573f610-2d06-4add-a22c-f90f61f3a95a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3cfec72-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.221 189512 DEBUG nova.objects.instance [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.240 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] End _get_guest_xml xml=<domain type="kvm">
Dec  1 22:56:11 compute-0 nova_compute[189508]:  <uuid>86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd</uuid>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  <name>instance-00000006</name>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  <memory>131072</memory>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  <vcpu>1</vcpu>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  <metadata>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <nova:name>tempest-ServerAddressesTestJSON-server-1030745659</nova:name>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <nova:creationTime>2025-12-01 22:56:11</nova:creationTime>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <nova:flavor name="m1.nano">
Dec  1 22:56:11 compute-0 nova_compute[189508]:        <nova:memory>128</nova:memory>
Dec  1 22:56:11 compute-0 nova_compute[189508]:        <nova:disk>1</nova:disk>
Dec  1 22:56:11 compute-0 nova_compute[189508]:        <nova:swap>0</nova:swap>
Dec  1 22:56:11 compute-0 nova_compute[189508]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 22:56:11 compute-0 nova_compute[189508]:        <nova:vcpus>1</nova:vcpus>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      </nova:flavor>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <nova:owner>
Dec  1 22:56:11 compute-0 nova_compute[189508]:        <nova:user uuid="2d96ce1170a34f538a6b777063374e7d">tempest-ServerAddressesTestJSON-529319613-project-member</nova:user>
Dec  1 22:56:11 compute-0 nova_compute[189508]:        <nova:project uuid="5188137218bd444b9e92a1299207f297">tempest-ServerAddressesTestJSON-529319613</nova:project>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      </nova:owner>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <nova:root type="image" uuid="74bb08bf-1799-4930-aad4-d505f26ff5f4"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <nova:ports>
Dec  1 22:56:11 compute-0 nova_compute[189508]:        <nova:port uuid="c3cfec72-c837-4139-9b78-a9e2dea166e8">
Dec  1 22:56:11 compute-0 nova_compute[189508]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:        </nova:port>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      </nova:ports>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    </nova:instance>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  </metadata>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  <sysinfo type="smbios">
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <system>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <entry name="manufacturer">RDO</entry>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <entry name="product">OpenStack Compute</entry>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <entry name="serial">86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd</entry>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <entry name="uuid">86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd</entry>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <entry name="family">Virtual Machine</entry>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    </system>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  </sysinfo>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  <os>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <boot dev="hd"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <smbios mode="sysinfo"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  </os>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  <features>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <acpi/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <apic/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <vmcoreinfo/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  </features>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  <clock offset="utc">
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <timer name="hpet" present="no"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  </clock>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  <cpu mode="host-model" match="exact">
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  </cpu>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  <devices>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd/disk"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <target dev="vda" bus="virtio"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <disk type="file" device="cdrom">
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd/disk.config"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <target dev="sda" bus="sata"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <interface type="ethernet">
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <mac address="fa:16:3e:66:8e:24"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <mtu size="1442"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <target dev="tapc3cfec72-c8"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    </interface>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <serial type="pty">
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <log file="/var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd/console.log" append="off"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    </serial>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <video>
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    </video>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <input type="tablet" bus="usb"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <rng model="virtio">
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <backend model="random">/dev/urandom</backend>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    </rng>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <controller type="usb" index="0"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    <memballoon model="virtio">
Dec  1 22:56:11 compute-0 nova_compute[189508]:      <stats period="10"/>
Dec  1 22:56:11 compute-0 nova_compute[189508]:    </memballoon>
Dec  1 22:56:11 compute-0 nova_compute[189508]:  </devices>
Dec  1 22:56:11 compute-0 nova_compute[189508]: </domain>
Dec  1 22:56:11 compute-0 nova_compute[189508]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.242 189512 DEBUG nova.compute.manager [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Preparing to wait for external event network-vif-plugged-c3cfec72-c837-4139-9b78-a9e2dea166e8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.242 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Acquiring lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.243 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.243 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.244 189512 DEBUG nova.virt.libvirt.vif [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:56:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1030745659',display_name='tempest-ServerAddressesTestJSON-server-1030745659',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1030745659',id=6,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5188137218bd444b9e92a1299207f297',ramdisk_id='',reservation_id='r-m2niz0rp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-529319613',owner_user_name='tempest-ServerAddressesTestJSON-529319613-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:56:05Z,user_data=None,user_id='2d96ce1170a34f538a6b777063374e7d',uuid=86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "address": "fa:16:3e:66:8e:24", "network": {"id": "2573f610-2d06-4add-a22c-f90f61f3a95a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533435019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5188137218bd444b9e92a1299207f297", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3cfec72-c8", "ovs_interfaceid": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.245 189512 DEBUG nova.network.os_vif_util [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Converting VIF {"id": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "address": "fa:16:3e:66:8e:24", "network": {"id": "2573f610-2d06-4add-a22c-f90f61f3a95a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533435019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5188137218bd444b9e92a1299207f297", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3cfec72-c8", "ovs_interfaceid": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.246 189512 DEBUG nova.network.os_vif_util [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:8e:24,bridge_name='br-int',has_traffic_filtering=True,id=c3cfec72-c837-4139-9b78-a9e2dea166e8,network=Network(2573f610-2d06-4add-a22c-f90f61f3a95a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3cfec72-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.247 189512 DEBUG os_vif [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:8e:24,bridge_name='br-int',has_traffic_filtering=True,id=c3cfec72-c837-4139-9b78-a9e2dea166e8,network=Network(2573f610-2d06-4add-a22c-f90f61f3a95a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3cfec72-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.249 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.250 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.251 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.258 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.259 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc3cfec72-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.260 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc3cfec72-c8, col_values=(('external_ids', {'iface-id': 'c3cfec72-c837-4139-9b78-a9e2dea166e8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:8e:24', 'vm-uuid': '86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.263 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:11 compute-0 NetworkManager[56278]: <info>  [1764629771.2650] manager: (tapc3cfec72-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.267 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.273 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.275 189512 INFO os_vif [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:8e:24,bridge_name='br-int',has_traffic_filtering=True,id=c3cfec72-c837-4139-9b78-a9e2dea166e8,network=Network(2573f610-2d06-4add-a22c-f90f61f3a95a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3cfec72-c8')#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.372 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.373 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.374 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] No VIF found with MAC fa:16:3e:66:8e:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.376 189512 INFO nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Using config drive#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.828 189512 INFO nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Creating config drive at /var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd/disk.config#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.838 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_92wqo3j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:11 compute-0 nova_compute[189508]: 2025-12-01 22:56:11.963 189512 DEBUG oslo_concurrency.processutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_92wqo3j" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:12 compute-0 kernel: tapc3cfec72-c8: entered promiscuous mode
Dec  1 22:56:12 compute-0 NetworkManager[56278]: <info>  [1764629772.0568] manager: (tapc3cfec72-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Dec  1 22:56:12 compute-0 ovn_controller[97770]: 2025-12-01T22:56:12Z|00066|binding|INFO|Claiming lport c3cfec72-c837-4139-9b78-a9e2dea166e8 for this chassis.
Dec  1 22:56:12 compute-0 nova_compute[189508]: 2025-12-01 22:56:12.059 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:12 compute-0 ovn_controller[97770]: 2025-12-01T22:56:12Z|00067|binding|INFO|c3cfec72-c837-4139-9b78-a9e2dea166e8: Claiming fa:16:3e:66:8e:24 10.100.0.8
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.066 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:8e:24 10.100.0.8'], port_security=['fa:16:3e:66:8e:24 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2573f610-2d06-4add-a22c-f90f61f3a95a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5188137218bd444b9e92a1299207f297', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fee585f9-2f59-4dfc-a390-e2fe7beb50b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78fd76c2-4096-4114-82fa-20be870e0268, chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=c3cfec72-c837-4139-9b78-a9e2dea166e8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.067 106662 INFO neutron.agent.ovn.metadata.agent [-] Port c3cfec72-c837-4139-9b78-a9e2dea166e8 in datapath 2573f610-2d06-4add-a22c-f90f61f3a95a bound to our chassis#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.068 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2573f610-2d06-4add-a22c-f90f61f3a95a#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.083 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb18924-288f-4fcd-8864-721c513c5c54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.085 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2573f610-21 in ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.087 239973 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2573f610-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.088 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1afd40-0bb2-4e1c-86e4-9abf628a81ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:12 compute-0 ovn_controller[97770]: 2025-12-01T22:56:12Z|00068|binding|INFO|Setting lport c3cfec72-c837-4139-9b78-a9e2dea166e8 ovn-installed in OVS
Dec  1 22:56:12 compute-0 ovn_controller[97770]: 2025-12-01T22:56:12Z|00069|binding|INFO|Setting lport c3cfec72-c837-4139-9b78-a9e2dea166e8 up in Southbound
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.089 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2fb4ae-40ac-45f2-8a18-9a2d71e47533]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:12 compute-0 nova_compute[189508]: 2025-12-01 22:56:12.091 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:12 compute-0 nova_compute[189508]: 2025-12-01 22:56:12.098 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.105 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[d485078a-f798-427a-b661-791c362c943b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:12 compute-0 systemd-machined[155759]: New machine qemu-6-instance-00000006.
Dec  1 22:56:12 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.143 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[35111f2c-82be-4b92-b0e7-9da1ca335c21]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:12 compute-0 systemd-udevd[251170]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:56:12 compute-0 NetworkManager[56278]: <info>  [1764629772.1764] device (tapc3cfec72-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 22:56:12 compute-0 NetworkManager[56278]: <info>  [1764629772.1776] device (tapc3cfec72-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.190 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[81cc6b64-f2d8-4374-99d6-4e2d102b2a28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.196 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[8633f729-d374-4e90-9f48-3ce1d53a4292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:12 compute-0 NetworkManager[56278]: <info>  [1764629772.1977] manager: (tap2573f610-20): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.232 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f815d0-55c9-443c-a303-1e86ea32bfe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.236 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[518929e9-a877-4b1d-99b8-a1f9f578472e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:12 compute-0 NetworkManager[56278]: <info>  [1764629772.2597] device (tap2573f610-20): carrier: link connected
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.267 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[104540e8-93c4-4170-afbb-901fc463b569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.286 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[76c86e7e-7788-4d4b-999b-8654bd17b584]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2573f610-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:c3:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528182, 'reachable_time': 34476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251199, 'error': None, 'target': 'ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.298 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[855b62b7-4b0e-46df-8818-a2100ebfe732]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec5:c328'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 528182, 'tstamp': 528182}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251200, 'error': None, 'target': 'ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.316 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[c47c7b65-73c5-403a-beae-62551b358bdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2573f610-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:c3:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528182, 'reachable_time': 34476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251201, 'error': None, 'target': 'ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.347 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[8828c045-c0bc-4fb2-a35f-099841bed8e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.408 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[f15a7320-5f67-410b-90f1-ea32ca62a0c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.410 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2573f610-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.410 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.410 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2573f610-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:56:12 compute-0 kernel: tap2573f610-20: entered promiscuous mode
Dec  1 22:56:12 compute-0 NetworkManager[56278]: <info>  [1764629772.4134] manager: (tap2573f610-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Dec  1 22:56:12 compute-0 nova_compute[189508]: 2025-12-01 22:56:12.412 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.415 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2573f610-20, col_values=(('external_ids', {'iface-id': 'cb337fb3-12ea-44e8-97d8-0cb3546f35a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:56:12 compute-0 ovn_controller[97770]: 2025-12-01T22:56:12Z|00070|binding|INFO|Releasing lport cb337fb3-12ea-44e8-97d8-0cb3546f35a6 from this chassis (sb_readonly=0)
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.420 106662 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2573f610-2d06-4add-a22c-f90f61f3a95a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2573f610-2d06-4add-a22c-f90f61f3a95a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.421 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[de389846-ecb5-4a88-9672-72a17d3631ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.422 106662 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: global
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    log         /dev/log local0 debug
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    log-tag     haproxy-metadata-proxy-2573f610-2d06-4add-a22c-f90f61f3a95a
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    user        root
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    group       root
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    maxconn     1024
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    pidfile     /var/lib/neutron/external/pids/2573f610-2d06-4add-a22c-f90f61f3a95a.pid.haproxy
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    daemon
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: defaults
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    log global
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    mode http
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    option httplog
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    option dontlognull
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    option http-server-close
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    option forwardfor
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    retries                 3
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    timeout http-request    30s
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    timeout connect         30s
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    timeout client          32s
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    timeout server          32s
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    timeout http-keep-alive 30s
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: listen listener
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    bind 169.254.169.254:80
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]:    http-request add-header X-OVN-Network-ID 2573f610-2d06-4add-a22c-f90f61f3a95a
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 22:56:12 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:12.423 106662 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a', 'env', 'PROCESS_TAG=haproxy-2573f610-2d06-4add-a22c-f90f61f3a95a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2573f610-2d06-4add-a22c-f90f61f3a95a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 22:56:12 compute-0 nova_compute[189508]: 2025-12-01 22:56:12.431 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:12 compute-0 nova_compute[189508]: 2025-12-01 22:56:12.643 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629772.6429174, 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:56:12 compute-0 nova_compute[189508]: 2025-12-01 22:56:12.644 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] VM Started (Lifecycle Event)#033[00m
Dec  1 22:56:12 compute-0 nova_compute[189508]: 2025-12-01 22:56:12.671 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:56:12 compute-0 nova_compute[189508]: 2025-12-01 22:56:12.678 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629772.64304, 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:56:12 compute-0 nova_compute[189508]: 2025-12-01 22:56:12.678 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] VM Paused (Lifecycle Event)#033[00m
Dec  1 22:56:12 compute-0 nova_compute[189508]: 2025-12-01 22:56:12.707 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:56:12 compute-0 nova_compute[189508]: 2025-12-01 22:56:12.712 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:56:12 compute-0 nova_compute[189508]: 2025-12-01 22:56:12.731 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:56:12 compute-0 nova_compute[189508]: 2025-12-01 22:56:12.749 189512 DEBUG nova.network.neutron [req-86ae4791-eba5-4b80-8a38-385d0491dd92 req-c44f48ff-c22a-44d9-8dae-86458bb44522 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Updated VIF entry in instance network info cache for port c3cfec72-c837-4139-9b78-a9e2dea166e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:56:12 compute-0 nova_compute[189508]: 2025-12-01 22:56:12.750 189512 DEBUG nova.network.neutron [req-86ae4791-eba5-4b80-8a38-385d0491dd92 req-c44f48ff-c22a-44d9-8dae-86458bb44522 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Updating instance_info_cache with network_info: [{"id": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "address": "fa:16:3e:66:8e:24", "network": {"id": "2573f610-2d06-4add-a22c-f90f61f3a95a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533435019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5188137218bd444b9e92a1299207f297", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3cfec72-c8", "ovs_interfaceid": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:56:12 compute-0 nova_compute[189508]: 2025-12-01 22:56:12.764 189512 DEBUG oslo_concurrency.lockutils [req-86ae4791-eba5-4b80-8a38-385d0491dd92 req-c44f48ff-c22a-44d9-8dae-86458bb44522 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:56:12 compute-0 podman[251240]: 2025-12-01 22:56:12.94008232 +0000 UTC m=+0.076833279 container create b84dd6da3b15e56ece4a939118e5c170d612ec917eead4072ed1ba3a83fb8fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:56:12 compute-0 systemd[1]: Started libpod-conmon-b84dd6da3b15e56ece4a939118e5c170d612ec917eead4072ed1ba3a83fb8fb0.scope.
Dec  1 22:56:12 compute-0 podman[251240]: 2025-12-01 22:56:12.902991034 +0000 UTC m=+0.039742023 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.026 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Acquiring lock "691446f5-d3d8-4a4f-a161-f2058a04a59d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.029 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "691446f5-d3d8-4a4f-a161-f2058a04a59d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:13 compute-0 systemd[1]: Started libcrun container.
Dec  1 22:56:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1795b7c225f302aab885d84e41f01e79f7412765c29a09ed10840adde455ff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.062 189512 DEBUG nova.compute.manager [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 22:56:13 compute-0 podman[251240]: 2025-12-01 22:56:13.071063106 +0000 UTC m=+0.207814095 container init b84dd6da3b15e56ece4a939118e5c170d612ec917eead4072ed1ba3a83fb8fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  1 22:56:13 compute-0 podman[251240]: 2025-12-01 22:56:13.080613295 +0000 UTC m=+0.217364254 container start b84dd6da3b15e56ece4a939118e5c170d612ec917eead4072ed1ba3a83fb8fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:56:13 compute-0 neutron-haproxy-ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a[251255]: [NOTICE]   (251259) : New worker (251261) forked
Dec  1 22:56:13 compute-0 neutron-haproxy-ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a[251255]: [NOTICE]   (251259) : Loading success.
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.186 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.187 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.197 189512 DEBUG nova.virt.hardware [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.197 189512 INFO nova.compute.claims [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.359 189512 DEBUG nova.compute.provider_tree [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.376 189512 DEBUG nova.scheduler.client.report [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.400 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.401 189512 DEBUG nova.compute.manager [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.464 189512 DEBUG nova.compute.manager [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.465 189512 DEBUG nova.network.neutron [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.491 189512 INFO nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.517 189512 DEBUG nova.compute.manager [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.627 189512 DEBUG nova.compute.manager [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.630 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.631 189512 INFO nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Creating image(s)#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.632 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Acquiring lock "/var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.633 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "/var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.635 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "/var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.658 189512 DEBUG oslo_concurrency.processutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.759 189512 DEBUG oslo_concurrency.processutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.761 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Acquiring lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.765 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.795 189512 DEBUG oslo_concurrency.processutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.876 189512 DEBUG oslo_concurrency.processutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.879 189512 DEBUG oslo_concurrency.processutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270,backing_fmt=raw /var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.941 189512 DEBUG oslo_concurrency.processutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270,backing_fmt=raw /var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk 1073741824" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.942 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.943 189512 DEBUG oslo_concurrency.processutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:13 compute-0 nova_compute[189508]: 2025-12-01 22:56:13.966 189512 DEBUG nova.policy [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9177a32b390447b1acbb338fbf90b4bc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5dde91941cac4081b671670d9a874621', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 22:56:14 compute-0 nova_compute[189508]: 2025-12-01 22:56:14.016 189512 DEBUG oslo_concurrency.processutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:14 compute-0 nova_compute[189508]: 2025-12-01 22:56:14.017 189512 DEBUG nova.virt.disk.api [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Checking if we can resize image /var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 22:56:14 compute-0 nova_compute[189508]: 2025-12-01 22:56:14.018 189512 DEBUG oslo_concurrency.processutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:14 compute-0 nova_compute[189508]: 2025-12-01 22:56:14.080 189512 DEBUG oslo_concurrency.processutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:14 compute-0 nova_compute[189508]: 2025-12-01 22:56:14.081 189512 DEBUG nova.virt.disk.api [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Cannot resize image /var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 22:56:14 compute-0 nova_compute[189508]: 2025-12-01 22:56:14.082 189512 DEBUG nova.objects.instance [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lazy-loading 'migration_context' on Instance uuid 691446f5-d3d8-4a4f-a161-f2058a04a59d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:56:14 compute-0 nova_compute[189508]: 2025-12-01 22:56:14.100 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 22:56:14 compute-0 nova_compute[189508]: 2025-12-01 22:56:14.101 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Ensure instance console log exists: /var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 22:56:14 compute-0 nova_compute[189508]: 2025-12-01 22:56:14.102 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:14 compute-0 nova_compute[189508]: 2025-12-01 22:56:14.103 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:14 compute-0 nova_compute[189508]: 2025-12-01 22:56:14.103 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:14 compute-0 nova_compute[189508]: 2025-12-01 22:56:14.647 189512 DEBUG nova.network.neutron [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Successfully created port: 2c9e194a-9ee9-406f-8afb-aba53adbc9d7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 22:56:15 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  1 22:56:15 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  1 22:56:15 compute-0 nova_compute[189508]: 2025-12-01 22:56:15.473 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:15 compute-0 nova_compute[189508]: 2025-12-01 22:56:15.685 189512 DEBUG nova.network.neutron [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Successfully updated port: 2c9e194a-9ee9-406f-8afb-aba53adbc9d7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 22:56:15 compute-0 nova_compute[189508]: 2025-12-01 22:56:15.708 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Acquiring lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:56:15 compute-0 nova_compute[189508]: 2025-12-01 22:56:15.709 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Acquired lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:56:15 compute-0 nova_compute[189508]: 2025-12-01 22:56:15.709 189512 DEBUG nova.network.neutron [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 22:56:15 compute-0 nova_compute[189508]: 2025-12-01 22:56:15.987 189512 DEBUG nova.network.neutron [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 22:56:16 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:16.238 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:56:16 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:16.239 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.243 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.263 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.563 189512 DEBUG nova.compute.manager [req-28069729-a30c-4101-9c7f-f4a1d0b5da66 req-966233b7-05c3-428e-9db8-3b6d35798271 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Received event network-vif-plugged-c3cfec72-c837-4139-9b78-a9e2dea166e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.564 189512 DEBUG oslo_concurrency.lockutils [req-28069729-a30c-4101-9c7f-f4a1d0b5da66 req-966233b7-05c3-428e-9db8-3b6d35798271 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.564 189512 DEBUG oslo_concurrency.lockutils [req-28069729-a30c-4101-9c7f-f4a1d0b5da66 req-966233b7-05c3-428e-9db8-3b6d35798271 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.564 189512 DEBUG oslo_concurrency.lockutils [req-28069729-a30c-4101-9c7f-f4a1d0b5da66 req-966233b7-05c3-428e-9db8-3b6d35798271 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.564 189512 DEBUG nova.compute.manager [req-28069729-a30c-4101-9c7f-f4a1d0b5da66 req-966233b7-05c3-428e-9db8-3b6d35798271 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Processing event network-vif-plugged-c3cfec72-c837-4139-9b78-a9e2dea166e8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.565 189512 DEBUG nova.compute.manager [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.573 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629776.572228, 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.574 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] VM Resumed (Lifecycle Event)#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.576 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.583 189512 INFO nova.virt.libvirt.driver [-] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Instance spawned successfully.#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.584 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.601 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.614 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.622 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.623 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.624 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.625 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.625 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.626 189512 DEBUG nova.virt.libvirt.driver [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.638 189512 DEBUG nova.compute.manager [req-2813143c-3c5c-4005-ad8e-09d9e2dff12f req-7e9beaa8-3cdc-422c-b08e-e4bd648b15f3 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Received event network-changed-2c9e194a-9ee9-406f-8afb-aba53adbc9d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.638 189512 DEBUG nova.compute.manager [req-2813143c-3c5c-4005-ad8e-09d9e2dff12f req-7e9beaa8-3cdc-422c-b08e-e4bd648b15f3 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Refreshing instance network info cache due to event network-changed-2c9e194a-9ee9-406f-8afb-aba53adbc9d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.639 189512 DEBUG oslo_concurrency.lockutils [req-2813143c-3c5c-4005-ad8e-09d9e2dff12f req-7e9beaa8-3cdc-422c-b08e-e4bd648b15f3 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.641 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.704 189512 INFO nova.compute.manager [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Took 11.63 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.705 189512 DEBUG nova.compute.manager [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.780 189512 INFO nova.compute.manager [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Took 12.13 seconds to build instance.#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.806 189512 DEBUG oslo_concurrency.lockutils [None req-ff4e9cab-6951-48c9-ad99-1166093498ad 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.952 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Acquiring lock "43481db0-816b-4096-a511-f46b9a3656d5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.952 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "43481db0-816b-4096-a511-f46b9a3656d5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:16 compute-0 nova_compute[189508]: 2025-12-01 22:56:16.981 189512 DEBUG nova.compute.manager [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.064 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.065 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.077 189512 DEBUG nova.virt.hardware [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.078 189512 INFO nova.compute.claims [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.247 189512 DEBUG nova.compute.provider_tree [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.278 189512 DEBUG nova.scheduler.client.report [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.310 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.312 189512 DEBUG nova.compute.manager [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.383 189512 DEBUG nova.compute.manager [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.385 189512 DEBUG nova.network.neutron [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.407 189512 INFO nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.430 189512 DEBUG nova.compute.manager [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.541 189512 DEBUG nova.compute.manager [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.548 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.549 189512 INFO nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Creating image(s)#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.550 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Acquiring lock "/var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.551 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "/var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.552 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "/var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.576 189512 DEBUG oslo_concurrency.processutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.674 189512 DEBUG oslo_concurrency.processutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.675 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Acquiring lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.676 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.693 189512 DEBUG oslo_concurrency.processutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.759 189512 DEBUG oslo_concurrency.processutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.760 189512 DEBUG oslo_concurrency.processutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270,backing_fmt=raw /var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.799 189512 DEBUG nova.policy [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '964f63f357b7496c959106655fdc82c3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3434d463800f4b268c2f67e9278a65ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.824 189512 DEBUG oslo_concurrency.processutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270,backing_fmt=raw /var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5/disk 1073741824" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.825 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.826 189512 DEBUG oslo_concurrency.processutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.850 189512 DEBUG nova.network.neutron [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Updating instance_info_cache with network_info: [{"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.878 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Releasing lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.879 189512 DEBUG nova.compute.manager [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Instance network_info: |[{"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.879 189512 DEBUG oslo_concurrency.lockutils [req-2813143c-3c5c-4005-ad8e-09d9e2dff12f req-7e9beaa8-3cdc-422c-b08e-e4bd648b15f3 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.880 189512 DEBUG nova.network.neutron [req-2813143c-3c5c-4005-ad8e-09d9e2dff12f req-7e9beaa8-3cdc-422c-b08e-e4bd648b15f3 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Refreshing network info cache for port 2c9e194a-9ee9-406f-8afb-aba53adbc9d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.884 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Start _get_guest_xml network_info=[{"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T22:55:21Z,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T22:55:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '74bb08bf-1799-4930-aad4-d505f26ff5f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.897 189512 WARNING nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.908 189512 DEBUG oslo_concurrency.processutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.909 189512 DEBUG nova.virt.disk.api [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Checking if we can resize image /var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.909 189512 DEBUG oslo_concurrency.processutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.937 189512 DEBUG nova.virt.libvirt.host [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.938 189512 DEBUG nova.virt.libvirt.host [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.948 189512 DEBUG nova.virt.libvirt.host [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.949 189512 DEBUG nova.virt.libvirt.host [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.949 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.950 189512 DEBUG nova.virt.hardware [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T22:55:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e42a55e-71e2-4041-8ca2-725d63f058bf',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T22:55:21Z,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T22:55:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.951 189512 DEBUG nova.virt.hardware [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.951 189512 DEBUG nova.virt.hardware [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.952 189512 DEBUG nova.virt.hardware [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.952 189512 DEBUG nova.virt.hardware [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.953 189512 DEBUG nova.virt.hardware [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.953 189512 DEBUG nova.virt.hardware [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.953 189512 DEBUG nova.virt.hardware [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.954 189512 DEBUG nova.virt.hardware [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.954 189512 DEBUG nova.virt.hardware [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.955 189512 DEBUG nova.virt.hardware [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.961 189512 DEBUG nova.virt.libvirt.vif [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:56:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-871685025',display_name='tempest-AttachInterfacesUnderV243Test-server-871685025',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-871685025',id=7,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUwdv+NY00dZ4Qak5VAhJonHJDg3QW/4qrZXWUPft55hAyY+K9JJ/IZy3JiB2DL4dT9YRZ4HS2lUokEK1+MWo4Kffjap+PoFdLJkWZvU88eiaYZMJygvq2Y3gk5LCAb/A==',key_name='tempest-keypair-1770308231',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5dde91941cac4081b671670d9a874621',ramdisk_id='',reservation_id='r-pp070lnj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1494013272',owner_user_name='tempest-AttachInterfacesUnderV243Test-1494013272-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:56:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9177a32b390447b1acbb338fbf90b4bc',uuid=691446f5-d3d8-4a4f-a161-f2058a04a59d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.961 189512 DEBUG nova.network.os_vif_util [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Converting VIF {"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.962 189512 DEBUG nova.network.os_vif_util [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:0a:ea,bridge_name='br-int',has_traffic_filtering=True,id=2c9e194a-9ee9-406f-8afb-aba53adbc9d7,network=Network(51d90832-bbf5-4d6e-98bd-38064caad349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c9e194a-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.963 189512 DEBUG nova.objects.instance [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lazy-loading 'pci_devices' on Instance uuid 691446f5-d3d8-4a4f-a161-f2058a04a59d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.977 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] End _get_guest_xml xml=<domain type="kvm">
Dec  1 22:56:17 compute-0 nova_compute[189508]:  <uuid>691446f5-d3d8-4a4f-a161-f2058a04a59d</uuid>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  <name>instance-00000007</name>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  <memory>131072</memory>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  <vcpu>1</vcpu>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  <metadata>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-871685025</nova:name>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <nova:creationTime>2025-12-01 22:56:17</nova:creationTime>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <nova:flavor name="m1.nano">
Dec  1 22:56:17 compute-0 nova_compute[189508]:        <nova:memory>128</nova:memory>
Dec  1 22:56:17 compute-0 nova_compute[189508]:        <nova:disk>1</nova:disk>
Dec  1 22:56:17 compute-0 nova_compute[189508]:        <nova:swap>0</nova:swap>
Dec  1 22:56:17 compute-0 nova_compute[189508]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 22:56:17 compute-0 nova_compute[189508]:        <nova:vcpus>1</nova:vcpus>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      </nova:flavor>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <nova:owner>
Dec  1 22:56:17 compute-0 nova_compute[189508]:        <nova:user uuid="9177a32b390447b1acbb338fbf90b4bc">tempest-AttachInterfacesUnderV243Test-1494013272-project-member</nova:user>
Dec  1 22:56:17 compute-0 nova_compute[189508]:        <nova:project uuid="5dde91941cac4081b671670d9a874621">tempest-AttachInterfacesUnderV243Test-1494013272</nova:project>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      </nova:owner>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <nova:root type="image" uuid="74bb08bf-1799-4930-aad4-d505f26ff5f4"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <nova:ports>
Dec  1 22:56:17 compute-0 nova_compute[189508]:        <nova:port uuid="2c9e194a-9ee9-406f-8afb-aba53adbc9d7">
Dec  1 22:56:17 compute-0 nova_compute[189508]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:        </nova:port>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      </nova:ports>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    </nova:instance>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  </metadata>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  <sysinfo type="smbios">
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <system>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <entry name="manufacturer">RDO</entry>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <entry name="product">OpenStack Compute</entry>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <entry name="serial">691446f5-d3d8-4a4f-a161-f2058a04a59d</entry>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <entry name="uuid">691446f5-d3d8-4a4f-a161-f2058a04a59d</entry>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <entry name="family">Virtual Machine</entry>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    </system>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  </sysinfo>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  <os>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <boot dev="hd"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <smbios mode="sysinfo"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  </os>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  <features>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <acpi/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <apic/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <vmcoreinfo/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  </features>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  <clock offset="utc">
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <timer name="hpet" present="no"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  </clock>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  <cpu mode="host-model" match="exact">
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  </cpu>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  <devices>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <target dev="vda" bus="virtio"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <disk type="file" device="cdrom">
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.config"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <target dev="sda" bus="sata"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <interface type="ethernet">
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <mac address="fa:16:3e:ad:0a:ea"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <mtu size="1442"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <target dev="tap2c9e194a-9e"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    </interface>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <serial type="pty">
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <log file="/var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/console.log" append="off"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    </serial>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <video>
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    </video>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <input type="tablet" bus="usb"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <rng model="virtio">
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <backend model="random">/dev/urandom</backend>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    </rng>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <controller type="usb" index="0"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    <memballoon model="virtio">
Dec  1 22:56:17 compute-0 nova_compute[189508]:      <stats period="10"/>
Dec  1 22:56:17 compute-0 nova_compute[189508]:    </memballoon>
Dec  1 22:56:17 compute-0 nova_compute[189508]:  </devices>
Dec  1 22:56:17 compute-0 nova_compute[189508]: </domain>
Dec  1 22:56:17 compute-0 nova_compute[189508]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.979 189512 DEBUG nova.compute.manager [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Preparing to wait for external event network-vif-plugged-2c9e194a-9ee9-406f-8afb-aba53adbc9d7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.980 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Acquiring lock "691446f5-d3d8-4a4f-a161-f2058a04a59d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.980 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "691446f5-d3d8-4a4f-a161-f2058a04a59d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.981 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "691446f5-d3d8-4a4f-a161-f2058a04a59d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.982 189512 DEBUG nova.virt.libvirt.vif [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:56:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-871685025',display_name='tempest-AttachInterfacesUnderV243Test-server-871685025',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-871685025',id=7,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUwdv+NY00dZ4Qak5VAhJonHJDg3QW/4qrZXWUPft55hAyY+K9JJ/IZy3JiB2DL4dT9YRZ4HS2lUokEK1+MWo4Kffjap+PoFdLJkWZvU88eiaYZMJygvq2Y3gk5LCAb/A==',key_name='tempest-keypair-1770308231',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5dde91941cac4081b671670d9a874621',ramdisk_id='',reservation_id='r-pp070lnj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1494013272',owner_user_name='tempest-AttachInterfacesUnderV243Test-1494013272-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:56:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9177a32b390447b1acbb338fbf90b4bc',uuid=691446f5-d3d8-4a4f-a161-f2058a04a59d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.983 189512 DEBUG nova.network.os_vif_util [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Converting VIF {"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.984 189512 DEBUG nova.network.os_vif_util [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:0a:ea,bridge_name='br-int',has_traffic_filtering=True,id=2c9e194a-9ee9-406f-8afb-aba53adbc9d7,network=Network(51d90832-bbf5-4d6e-98bd-38064caad349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c9e194a-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.985 189512 DEBUG os_vif [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:0a:ea,bridge_name='br-int',has_traffic_filtering=True,id=2c9e194a-9ee9-406f-8afb-aba53adbc9d7,network=Network(51d90832-bbf5-4d6e-98bd-38064caad349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c9e194a-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.986 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.987 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.988 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.994 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.994 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c9e194a-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:56:17 compute-0 nova_compute[189508]: 2025-12-01 22:56:17.995 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c9e194a-9e, col_values=(('external_ids', {'iface-id': '2c9e194a-9ee9-406f-8afb-aba53adbc9d7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:0a:ea', 'vm-uuid': '691446f5-d3d8-4a4f-a161-f2058a04a59d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:56:18 compute-0 NetworkManager[56278]: <info>  [1764629777.9990] manager: (tap2c9e194a-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.002 189512 DEBUG oslo_concurrency.processutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.003 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.004 189512 DEBUG nova.virt.disk.api [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Cannot resize image /var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.005 189512 DEBUG nova.objects.instance [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lazy-loading 'migration_context' on Instance uuid 43481db0-816b-4096-a511-f46b9a3656d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.009 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.011 189512 INFO os_vif [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:0a:ea,bridge_name='br-int',has_traffic_filtering=True,id=2c9e194a-9ee9-406f-8afb-aba53adbc9d7,network=Network(51d90832-bbf5-4d6e-98bd-38064caad349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c9e194a-9e')#033[00m
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.021 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.021 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Ensure instance console log exists: /var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.022 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.022 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.022 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.081 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.082 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.085 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] No VIF found with MAC fa:16:3e:ad:0a:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.086 189512 INFO nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Using config drive
Dec  1 22:56:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:18.241 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.627 189512 INFO nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Creating config drive at /var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.config
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.641 189512 DEBUG oslo_concurrency.processutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyj1e2t_s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.780 189512 DEBUG oslo_concurrency.processutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyj1e2t_s" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.805 189512 DEBUG nova.compute.manager [req-c7dbe008-ff51-4b75-8043-f07ea849bbac req-dca55307-7774-4d43-8ce1-ffb9e12a8612 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Received event network-vif-plugged-c3cfec72-c837-4139-9b78-a9e2dea166e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.806 189512 DEBUG oslo_concurrency.lockutils [req-c7dbe008-ff51-4b75-8043-f07ea849bbac req-dca55307-7774-4d43-8ce1-ffb9e12a8612 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.807 189512 DEBUG oslo_concurrency.lockutils [req-c7dbe008-ff51-4b75-8043-f07ea849bbac req-dca55307-7774-4d43-8ce1-ffb9e12a8612 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.807 189512 DEBUG oslo_concurrency.lockutils [req-c7dbe008-ff51-4b75-8043-f07ea849bbac req-dca55307-7774-4d43-8ce1-ffb9e12a8612 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.808 189512 DEBUG nova.compute.manager [req-c7dbe008-ff51-4b75-8043-f07ea849bbac req-dca55307-7774-4d43-8ce1-ffb9e12a8612 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] No waiting events found dispatching network-vif-plugged-c3cfec72-c837-4139-9b78-a9e2dea166e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.808 189512 WARNING nova.compute.manager [req-c7dbe008-ff51-4b75-8043-f07ea849bbac req-dca55307-7774-4d43-8ce1-ffb9e12a8612 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Received unexpected event network-vif-plugged-c3cfec72-c837-4139-9b78-a9e2dea166e8 for instance with vm_state active and task_state None.
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.874 189512 DEBUG nova.network.neutron [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Successfully created port: 1110de1e-b008-47e8-9369-232fb9ff016e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  1 22:56:18 compute-0 kernel: tap2c9e194a-9e: entered promiscuous mode
Dec  1 22:56:18 compute-0 NetworkManager[56278]: <info>  [1764629778.8811] manager: (tap2c9e194a-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Dec  1 22:56:18 compute-0 ovn_controller[97770]: 2025-12-01T22:56:18Z|00071|binding|INFO|Claiming lport 2c9e194a-9ee9-406f-8afb-aba53adbc9d7 for this chassis.
Dec  1 22:56:18 compute-0 ovn_controller[97770]: 2025-12-01T22:56:18Z|00072|binding|INFO|2c9e194a-9ee9-406f-8afb-aba53adbc9d7: Claiming fa:16:3e:ad:0a:ea 10.100.0.11
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.890 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:18.898 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:0a:ea 10.100.0.11'], port_security=['fa:16:3e:ad:0a:ea 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '691446f5-d3d8-4a4f-a161-f2058a04a59d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51d90832-bbf5-4d6e-98bd-38064caad349', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5dde91941cac4081b671670d9a874621', 'neutron:revision_number': '2', 'neutron:security_group_ids': '544b5cb0-fe7d-410d-9d36-89c1d5ce3010', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca374238-9b29-4fbb-8971-048cd0a5e9c0, chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=2c9e194a-9ee9-406f-8afb-aba53adbc9d7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 22:56:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:18.900 106662 INFO neutron.agent.ovn.metadata.agent [-] Port 2c9e194a-9ee9-406f-8afb-aba53adbc9d7 in datapath 51d90832-bbf5-4d6e-98bd-38064caad349 bound to our chassis
Dec  1 22:56:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:18.903 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 51d90832-bbf5-4d6e-98bd-38064caad349
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.915 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.921 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:18 compute-0 ovn_controller[97770]: 2025-12-01T22:56:18Z|00073|binding|INFO|Setting lport 2c9e194a-9ee9-406f-8afb-aba53adbc9d7 ovn-installed in OVS
Dec  1 22:56:18 compute-0 ovn_controller[97770]: 2025-12-01T22:56:18Z|00074|binding|INFO|Setting lport 2c9e194a-9ee9-406f-8afb-aba53adbc9d7 up in Southbound
Dec  1 22:56:18 compute-0 nova_compute[189508]: 2025-12-01 22:56:18.929 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:18.933 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[0e295159-86ed-493e-a148-fb79dd92e924]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:18.934 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap51d90832-b1 in ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec  1 22:56:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:18.936 239973 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap51d90832-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec  1 22:56:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:18.936 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[5dbc29f1-4f47-4b8f-96fd-df1910355722]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:18 compute-0 systemd-udevd[251339]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:56:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:18.942 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[9efdecb9-8842-4f1e-8522-51bf9ac37ea7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:18 compute-0 systemd-machined[155759]: New machine qemu-7-instance-00000007.
Dec  1 22:56:18 compute-0 NetworkManager[56278]: <info>  [1764629778.9558] device (tap2c9e194a-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 22:56:18 compute-0 NetworkManager[56278]: <info>  [1764629778.9598] device (tap2c9e194a-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 22:56:18 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Dec  1 22:56:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:18.964 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[cf11718a-fcac-4507-9c42-8b3bf8eeef04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:18.995 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[135deade-557c-4498-9c1c-a72af2b31375]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.046 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc77e72-451e-47a2-b56b-a7a432986e7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:19 compute-0 systemd-udevd[251343]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:56:19 compute-0 NetworkManager[56278]: <info>  [1764629779.0622] manager: (tap51d90832-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.061 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[222dd6c6-f7ed-4eb4-af99-b8c4099776db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.113 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd7d8ce-4577-4680-b992-090e47e2ac74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.118 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[69cfe1c7-dcd5-4332-a076-4bc541e2f8ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:19 compute-0 NetworkManager[56278]: <info>  [1764629779.1553] device (tap51d90832-b0): carrier: link connected
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.163 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[da68107b-3556-4d49-bb2b-431b062ba629]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.191 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[fc48385b-5986-4844-9d5e-085f8afe5449]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51d90832-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:db:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528872, 'reachable_time': 27097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251372, 'error': None, 'target': 'ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.222 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[9dc4688e-21c8-49ef-af34-2ab7b7c4f141]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:db0a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 528872, 'tstamp': 528872}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251377, 'error': None, 'target': 'ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.244 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c555bc-19d1-410e-8547-399d7001ab8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51d90832-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:db:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528872, 'reachable_time': 27097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251380, 'error': None, 'target': 'ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.311 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb4b644-ac7f-4490-8f96-d540e75178ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.359 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629779.3591943, 691446f5-d3d8-4a4f-a161-f2058a04a59d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.361 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] VM Started (Lifecycle Event)
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.392 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.400 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629779.3594444, 691446f5-d3d8-4a4f-a161-f2058a04a59d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.401 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] VM Paused (Lifecycle Event)
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.424 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.426 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[563754dd-d4e1-4885-83da-d3e44b4abd38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.428 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51d90832-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.430 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.431 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51d90832-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.434 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:19 compute-0 kernel: tap51d90832-b0: entered promiscuous mode
Dec  1 22:56:19 compute-0 NetworkManager[56278]: <info>  [1764629779.4405] manager: (tap51d90832-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.444 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.447 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap51d90832-b0, col_values=(('external_ids', {'iface-id': '0bac805e-79cd-4ef5-a08c-830fa9d99912'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.449 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 22:56:19 compute-0 ovn_controller[97770]: 2025-12-01T22:56:19Z|00075|binding|INFO|Releasing lport 0bac805e-79cd-4ef5-a08c-830fa9d99912 from this chassis (sb_readonly=0)
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.451 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.454 106662 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/51d90832-bbf5-4d6e-98bd-38064caad349.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/51d90832-bbf5-4d6e-98bd-38064caad349.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.456 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e764b4-47c5-4a77-9d7d-ebf4cbd18c59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.457 106662 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: global
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    log         /dev/log local0 debug
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    log-tag     haproxy-metadata-proxy-51d90832-bbf5-4d6e-98bd-38064caad349
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    user        root
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    group       root
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    maxconn     1024
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    pidfile     /var/lib/neutron/external/pids/51d90832-bbf5-4d6e-98bd-38064caad349.pid.haproxy
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    daemon
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: defaults
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    log global
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    mode http
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    option httplog
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    option dontlognull
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    option http-server-close
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    option forwardfor
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    retries                 3
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    timeout http-request    30s
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    timeout connect         30s
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    timeout client          32s
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    timeout server          32s
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    timeout http-keep-alive 30s
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: listen listener
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    bind 169.254.169.254:80
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]:    http-request add-header X-OVN-Network-ID 51d90832-bbf5-4d6e-98bd-38064caad349
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.458 106662 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349', 'env', 'PROCESS_TAG=haproxy-51d90832-bbf5-4d6e-98bd-38064caad349', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/51d90832-bbf5-4d6e-98bd-38064caad349.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.482 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.486 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.488 189512 DEBUG oslo_concurrency.lockutils [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Acquiring lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.489 189512 DEBUG oslo_concurrency.lockutils [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.489 189512 DEBUG oslo_concurrency.lockutils [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Acquiring lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.489 189512 DEBUG oslo_concurrency.lockutils [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.490 189512 DEBUG oslo_concurrency.lockutils [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.491 189512 INFO nova.compute.manager [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Terminating instance#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.492 189512 DEBUG nova.compute.manager [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 22:56:19 compute-0 kernel: tapc3cfec72-c8 (unregistering): left promiscuous mode
Dec  1 22:56:19 compute-0 NetworkManager[56278]: <info>  [1764629779.5192] device (tapc3cfec72-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.534 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:19 compute-0 ovn_controller[97770]: 2025-12-01T22:56:19Z|00076|binding|INFO|Releasing lport c3cfec72-c837-4139-9b78-a9e2dea166e8 from this chassis (sb_readonly=0)
Dec  1 22:56:19 compute-0 ovn_controller[97770]: 2025-12-01T22:56:19Z|00077|binding|INFO|Setting lport c3cfec72-c837-4139-9b78-a9e2dea166e8 down in Southbound
Dec  1 22:56:19 compute-0 ovn_controller[97770]: 2025-12-01T22:56:19Z|00078|binding|INFO|Removing iface tapc3cfec72-c8 ovn-installed in OVS
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.540 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.556 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:19.559 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:8e:24 10.100.0.8'], port_security=['fa:16:3e:66:8e:24 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2573f610-2d06-4add-a22c-f90f61f3a95a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5188137218bd444b9e92a1299207f297', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fee585f9-2f59-4dfc-a390-e2fe7beb50b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78fd76c2-4096-4114-82fa-20be870e0268, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=c3cfec72-c837-4139-9b78-a9e2dea166e8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:56:19 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec  1 22:56:19 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 3.781s CPU time.
Dec  1 22:56:19 compute-0 systemd-machined[155759]: Machine qemu-6-instance-00000006 terminated.
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.789 189512 INFO nova.virt.libvirt.driver [-] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Instance destroyed successfully.#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.790 189512 DEBUG nova.objects.instance [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lazy-loading 'resources' on Instance uuid 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.809 189512 DEBUG nova.virt.libvirt.vif [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:56:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1030745659',display_name='tempest-ServerAddressesTestJSON-server-1030745659',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1030745659',id=6,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T22:56:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5188137218bd444b9e92a1299207f297',ramdisk_id='',reservation_id='r-m2niz0rp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-529319613',owner_user_name='tempest-ServerAddressesTestJSON-529319613-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T22:56:16Z,user_data=None,user_id='2d96ce1170a34f538a6b777063374e7d',uuid=86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "address": "fa:16:3e:66:8e:24", "network": {"id": "2573f610-2d06-4add-a22c-f90f61f3a95a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533435019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5188137218bd444b9e92a1299207f297", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3cfec72-c8", "ovs_interfaceid": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.810 189512 DEBUG nova.network.os_vif_util [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Converting VIF {"id": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "address": "fa:16:3e:66:8e:24", "network": {"id": "2573f610-2d06-4add-a22c-f90f61f3a95a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1533435019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5188137218bd444b9e92a1299207f297", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3cfec72-c8", "ovs_interfaceid": "c3cfec72-c837-4139-9b78-a9e2dea166e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.811 189512 DEBUG nova.network.os_vif_util [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:8e:24,bridge_name='br-int',has_traffic_filtering=True,id=c3cfec72-c837-4139-9b78-a9e2dea166e8,network=Network(2573f610-2d06-4add-a22c-f90f61f3a95a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3cfec72-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.812 189512 DEBUG os_vif [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:8e:24,bridge_name='br-int',has_traffic_filtering=True,id=c3cfec72-c837-4139-9b78-a9e2dea166e8,network=Network(2573f610-2d06-4add-a22c-f90f61f3a95a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3cfec72-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.815 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.815 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3cfec72-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.819 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.823 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.826 189512 INFO os_vif [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:8e:24,bridge_name='br-int',has_traffic_filtering=True,id=c3cfec72-c837-4139-9b78-a9e2dea166e8,network=Network(2573f610-2d06-4add-a22c-f90f61f3a95a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3cfec72-c8')#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.827 189512 INFO nova.virt.libvirt.driver [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Deleting instance files /var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd_del#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.829 189512 INFO nova.virt.libvirt.driver [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Deletion of /var/lib/nova/instances/86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd_del complete#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.906 189512 INFO nova.compute.manager [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Took 0.41 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.907 189512 DEBUG oslo.service.loopingcall [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.907 189512 DEBUG nova.compute.manager [-] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 22:56:19 compute-0 nova_compute[189508]: 2025-12-01 22:56:19.907 189512 DEBUG nova.network.neutron [-] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 22:56:19 compute-0 podman[251432]: 2025-12-01 22:56:19.988756372 +0000 UTC m=+0.095547420 container create b597812cd085860e933e9b3c6896e753687ad314b222b90bbeeaa64d60420cb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:56:20 compute-0 podman[251432]: 2025-12-01 22:56:19.939227568 +0000 UTC m=+0.046018646 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 22:56:20 compute-0 systemd[1]: Started libpod-conmon-b597812cd085860e933e9b3c6896e753687ad314b222b90bbeeaa64d60420cb8.scope.
Dec  1 22:56:20 compute-0 systemd[1]: Started libcrun container.
Dec  1 22:56:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42911f31bf2d8136a681c844dcf48bdb9d5c184beba69980316e149b96b55c7a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 22:56:20 compute-0 ovn_controller[97770]: 2025-12-01T22:56:20Z|00079|binding|INFO|Releasing lport 0bac805e-79cd-4ef5-a08c-830fa9d99912 from this chassis (sb_readonly=0)
Dec  1 22:56:20 compute-0 ovn_controller[97770]: 2025-12-01T22:56:20Z|00080|binding|INFO|Releasing lport cb337fb3-12ea-44e8-97d8-0cb3546f35a6 from this chassis (sb_readonly=0)
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.109 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:20 compute-0 podman[251432]: 2025-12-01 22:56:20.121764114 +0000 UTC m=+0.228555192 container init b597812cd085860e933e9b3c6896e753687ad314b222b90bbeeaa64d60420cb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:56:20 compute-0 podman[251432]: 2025-12-01 22:56:20.138514509 +0000 UTC m=+0.245305557 container start b597812cd085860e933e9b3c6896e753687ad314b222b90bbeeaa64d60420cb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  1 22:56:20 compute-0 neutron-haproxy-ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349[251445]: [NOTICE]   (251449) : New worker (251451) forked
Dec  1 22:56:20 compute-0 neutron-haproxy-ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349[251445]: [NOTICE]   (251449) : Loading success.
Dec  1 22:56:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:20.236 106662 INFO neutron.agent.ovn.metadata.agent [-] Port c3cfec72-c837-4139-9b78-a9e2dea166e8 in datapath 2573f610-2d06-4add-a22c-f90f61f3a95a unbound from our chassis#033[00m
Dec  1 22:56:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:20.240 106662 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2573f610-2d06-4add-a22c-f90f61f3a95a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 22:56:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:20.242 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[e23070e9-424a-44a7-a2cd-bf1b901307dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:20.243 106662 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a namespace which is not needed anymore#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.246 189512 DEBUG nova.network.neutron [req-2813143c-3c5c-4005-ad8e-09d9e2dff12f req-7e9beaa8-3cdc-422c-b08e-e4bd648b15f3 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Updated VIF entry in instance network info cache for port 2c9e194a-9ee9-406f-8afb-aba53adbc9d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.246 189512 DEBUG nova.network.neutron [req-2813143c-3c5c-4005-ad8e-09d9e2dff12f req-7e9beaa8-3cdc-422c-b08e-e4bd648b15f3 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Updating instance_info_cache with network_info: [{"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.272 189512 DEBUG oslo_concurrency.lockutils [req-2813143c-3c5c-4005-ad8e-09d9e2dff12f req-7e9beaa8-3cdc-422c-b08e-e4bd648b15f3 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:56:20 compute-0 ovn_controller[97770]: 2025-12-01T22:56:20Z|00081|binding|INFO|Releasing lport 0bac805e-79cd-4ef5-a08c-830fa9d99912 from this chassis (sb_readonly=0)
Dec  1 22:56:20 compute-0 ovn_controller[97770]: 2025-12-01T22:56:20Z|00082|binding|INFO|Releasing lport cb337fb3-12ea-44e8-97d8-0cb3546f35a6 from this chassis (sb_readonly=0)
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.347 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:20 compute-0 neutron-haproxy-ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a[251255]: [NOTICE]   (251259) : haproxy version is 2.8.14-c23fe91
Dec  1 22:56:20 compute-0 neutron-haproxy-ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a[251255]: [NOTICE]   (251259) : path to executable is /usr/sbin/haproxy
Dec  1 22:56:20 compute-0 neutron-haproxy-ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a[251255]: [WARNING]  (251259) : Exiting Master process...
Dec  1 22:56:20 compute-0 neutron-haproxy-ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a[251255]: [WARNING]  (251259) : Exiting Master process...
Dec  1 22:56:20 compute-0 neutron-haproxy-ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a[251255]: [ALERT]    (251259) : Current worker (251261) exited with code 143 (Terminated)
Dec  1 22:56:20 compute-0 neutron-haproxy-ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a[251255]: [WARNING]  (251259) : All workers exited. Exiting... (0)
Dec  1 22:56:20 compute-0 systemd[1]: libpod-b84dd6da3b15e56ece4a939118e5c170d612ec917eead4072ed1ba3a83fb8fb0.scope: Deactivated successfully.
Dec  1 22:56:20 compute-0 podman[251477]: 2025-12-01 22:56:20.445142773 +0000 UTC m=+0.056019739 container died b84dd6da3b15e56ece4a939118e5c170d612ec917eead4072ed1ba3a83fb8fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.476 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b84dd6da3b15e56ece4a939118e5c170d612ec917eead4072ed1ba3a83fb8fb0-userdata-shm.mount: Deactivated successfully.
Dec  1 22:56:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e1795b7c225f302aab885d84e41f01e79f7412765c29a09ed10840adde455ff-merged.mount: Deactivated successfully.
Dec  1 22:56:20 compute-0 podman[251477]: 2025-12-01 22:56:20.500606966 +0000 UTC m=+0.111483932 container cleanup b84dd6da3b15e56ece4a939118e5c170d612ec917eead4072ed1ba3a83fb8fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 22:56:20 compute-0 systemd[1]: libpod-conmon-b84dd6da3b15e56ece4a939118e5c170d612ec917eead4072ed1ba3a83fb8fb0.scope: Deactivated successfully.
Dec  1 22:56:20 compute-0 podman[251507]: 2025-12-01 22:56:20.57445888 +0000 UTC m=+0.048219778 container remove b84dd6da3b15e56ece4a939118e5c170d612ec917eead4072ed1ba3a83fb8fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 22:56:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:20.584 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[c788f121-f9ed-491f-b407-94a69de9d518]: (4, ('Mon Dec  1 10:56:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a (b84dd6da3b15e56ece4a939118e5c170d612ec917eead4072ed1ba3a83fb8fb0)\nb84dd6da3b15e56ece4a939118e5c170d612ec917eead4072ed1ba3a83fb8fb0\nMon Dec  1 10:56:20 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a (b84dd6da3b15e56ece4a939118e5c170d612ec917eead4072ed1ba3a83fb8fb0)\nb84dd6da3b15e56ece4a939118e5c170d612ec917eead4072ed1ba3a83fb8fb0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:20.586 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2a4ca5-0494-4441-bd3a-2b74c246e7dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:20.587 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2573f610-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.590 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:20 compute-0 kernel: tap2573f610-20: left promiscuous mode
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.592 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:20.595 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc5ab09-2dad-4695-82d6-4b821cf0fbf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.610 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:20.618 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[ecdc0e68-7461-4ecf-8ef7-110afbd6dd39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:20.619 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[52f477cf-1297-450e-b29b-bd81e4816d8f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:20.637 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[a3479691-8530-452c-9978-79a5b042f89c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528174, 'reachable_time': 26218, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251521, 'error': None, 'target': 'ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d2573f610\x2d2d06\x2d4add\x2da22c\x2df90f61f3a95a.mount: Deactivated successfully.
Dec  1 22:56:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:20.640 106770 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2573f610-2d06-4add-a22c-f90f61f3a95a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 22:56:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:20.641 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[73ed9453-d217-4353-a45c-85eeb7c2788f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.834 189512 DEBUG nova.network.neutron [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Successfully updated port: 1110de1e-b008-47e8-9369-232fb9ff016e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.849 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Acquiring lock "refresh_cache-43481db0-816b-4096-a511-f46b9a3656d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.850 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Acquired lock "refresh_cache-43481db0-816b-4096-a511-f46b9a3656d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.850 189512 DEBUG nova.network.neutron [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.907 189512 DEBUG nova.compute.manager [req-1894f5ae-a318-49e6-b52c-4422ce4f8c3e req-57d03ecb-dcba-43f8-a581-e29c8c24670d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Received event network-vif-unplugged-c3cfec72-c837-4139-9b78-a9e2dea166e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.908 189512 DEBUG oslo_concurrency.lockutils [req-1894f5ae-a318-49e6-b52c-4422ce4f8c3e req-57d03ecb-dcba-43f8-a581-e29c8c24670d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.908 189512 DEBUG oslo_concurrency.lockutils [req-1894f5ae-a318-49e6-b52c-4422ce4f8c3e req-57d03ecb-dcba-43f8-a581-e29c8c24670d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.908 189512 DEBUG oslo_concurrency.lockutils [req-1894f5ae-a318-49e6-b52c-4422ce4f8c3e req-57d03ecb-dcba-43f8-a581-e29c8c24670d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.908 189512 DEBUG nova.compute.manager [req-1894f5ae-a318-49e6-b52c-4422ce4f8c3e req-57d03ecb-dcba-43f8-a581-e29c8c24670d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] No waiting events found dispatching network-vif-unplugged-c3cfec72-c837-4139-9b78-a9e2dea166e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.909 189512 DEBUG nova.compute.manager [req-1894f5ae-a318-49e6-b52c-4422ce4f8c3e req-57d03ecb-dcba-43f8-a581-e29c8c24670d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Received event network-vif-unplugged-c3cfec72-c837-4139-9b78-a9e2dea166e8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.909 189512 DEBUG nova.compute.manager [req-1894f5ae-a318-49e6-b52c-4422ce4f8c3e req-57d03ecb-dcba-43f8-a581-e29c8c24670d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Received event network-vif-plugged-c3cfec72-c837-4139-9b78-a9e2dea166e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.909 189512 DEBUG oslo_concurrency.lockutils [req-1894f5ae-a318-49e6-b52c-4422ce4f8c3e req-57d03ecb-dcba-43f8-a581-e29c8c24670d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.909 189512 DEBUG oslo_concurrency.lockutils [req-1894f5ae-a318-49e6-b52c-4422ce4f8c3e req-57d03ecb-dcba-43f8-a581-e29c8c24670d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.909 189512 DEBUG oslo_concurrency.lockutils [req-1894f5ae-a318-49e6-b52c-4422ce4f8c3e req-57d03ecb-dcba-43f8-a581-e29c8c24670d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.909 189512 DEBUG nova.compute.manager [req-1894f5ae-a318-49e6-b52c-4422ce4f8c3e req-57d03ecb-dcba-43f8-a581-e29c8c24670d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] No waiting events found dispatching network-vif-plugged-c3cfec72-c837-4139-9b78-a9e2dea166e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:56:20 compute-0 nova_compute[189508]: 2025-12-01 22:56:20.910 189512 WARNING nova.compute.manager [req-1894f5ae-a318-49e6-b52c-4422ce4f8c3e req-57d03ecb-dcba-43f8-a581-e29c8c24670d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Received unexpected event network-vif-plugged-c3cfec72-c837-4139-9b78-a9e2dea166e8 for instance with vm_state active and task_state deleting.#033[00m
Dec  1 22:56:21 compute-0 nova_compute[189508]: 2025-12-01 22:56:21.052 189512 DEBUG nova.network.neutron [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 22:56:21 compute-0 nova_compute[189508]: 2025-12-01 22:56:21.386 189512 DEBUG nova.network.neutron [-] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:56:21 compute-0 nova_compute[189508]: 2025-12-01 22:56:21.410 189512 INFO nova.compute.manager [-] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Took 1.50 seconds to deallocate network for instance.#033[00m
Dec  1 22:56:21 compute-0 nova_compute[189508]: 2025-12-01 22:56:21.472 189512 DEBUG oslo_concurrency.lockutils [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:21 compute-0 nova_compute[189508]: 2025-12-01 22:56:21.472 189512 DEBUG oslo_concurrency.lockutils [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:21 compute-0 nova_compute[189508]: 2025-12-01 22:56:21.563 189512 DEBUG nova.compute.provider_tree [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:56:21 compute-0 nova_compute[189508]: 2025-12-01 22:56:21.576 189512 DEBUG nova.scheduler.client.report [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:56:21 compute-0 nova_compute[189508]: 2025-12-01 22:56:21.607 189512 DEBUG oslo_concurrency.lockutils [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:21 compute-0 nova_compute[189508]: 2025-12-01 22:56:21.664 189512 INFO nova.scheduler.client.report [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Deleted allocations for instance 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd#033[00m
Dec  1 22:56:21 compute-0 nova_compute[189508]: 2025-12-01 22:56:21.723 189512 DEBUG nova.compute.manager [req-06dee441-7239-4273-947e-6e979493c8bb req-aff862dc-eec8-431a-9ce9-2926acabc171 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Received event network-vif-deleted-c3cfec72-c837-4139-9b78-a9e2dea166e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:56:21 compute-0 nova_compute[189508]: 2025-12-01 22:56:21.745 189512 DEBUG oslo_concurrency.lockutils [None req-e79c6d14-999b-4f7e-9976-98d64d66fea2 2d96ce1170a34f538a6b777063374e7d 5188137218bd444b9e92a1299207f297 - - default default] Lock "86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.155 189512 DEBUG nova.network.neutron [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Updating instance_info_cache with network_info: [{"id": "1110de1e-b008-47e8-9369-232fb9ff016e", "address": "fa:16:3e:42:b0:fe", "network": {"id": "aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61", "bridge": "br-int", "label": "tempest-ServersTestJSON-531033534-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3434d463800f4b268c2f67e9278a65ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1110de1e-b0", "ovs_interfaceid": "1110de1e-b008-47e8-9369-232fb9ff016e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.195 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Releasing lock "refresh_cache-43481db0-816b-4096-a511-f46b9a3656d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.195 189512 DEBUG nova.compute.manager [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Instance network_info: |[{"id": "1110de1e-b008-47e8-9369-232fb9ff016e", "address": "fa:16:3e:42:b0:fe", "network": {"id": "aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61", "bridge": "br-int", "label": "tempest-ServersTestJSON-531033534-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3434d463800f4b268c2f67e9278a65ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1110de1e-b0", "ovs_interfaceid": "1110de1e-b008-47e8-9369-232fb9ff016e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.197 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Start _get_guest_xml network_info=[{"id": "1110de1e-b008-47e8-9369-232fb9ff016e", "address": "fa:16:3e:42:b0:fe", "network": {"id": "aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61", "bridge": "br-int", "label": "tempest-ServersTestJSON-531033534-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3434d463800f4b268c2f67e9278a65ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1110de1e-b0", "ovs_interfaceid": "1110de1e-b008-47e8-9369-232fb9ff016e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T22:55:21Z,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T22:55:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '74bb08bf-1799-4930-aad4-d505f26ff5f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.206 189512 WARNING nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.211 189512 DEBUG nova.virt.libvirt.host [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.212 189512 DEBUG nova.virt.libvirt.host [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.219 189512 DEBUG nova.virt.libvirt.host [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.220 189512 DEBUG nova.virt.libvirt.host [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.220 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.221 189512 DEBUG nova.virt.hardware [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T22:55:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e42a55e-71e2-4041-8ca2-725d63f058bf',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T22:55:21Z,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T22:55:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.222 189512 DEBUG nova.virt.hardware [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.222 189512 DEBUG nova.virt.hardware [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.223 189512 DEBUG nova.virt.hardware [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.223 189512 DEBUG nova.virt.hardware [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.223 189512 DEBUG nova.virt.hardware [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.224 189512 DEBUG nova.virt.hardware [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.224 189512 DEBUG nova.virt.hardware [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.225 189512 DEBUG nova.virt.hardware [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.225 189512 DEBUG nova.virt.hardware [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.226 189512 DEBUG nova.virt.hardware [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.233 189512 DEBUG nova.virt.libvirt.vif [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:56:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-418498432',display_name='tempest-ServersTestJSON-server-418498432',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-418498432',id=8,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9P9QfiFlABUbJCxtNsA3nKG9t/u23F/v0ft5XMrq92TJJgEwvo4o7JwrV4wU4r8VjtRsHt4jaGWcl4QFWwrZ6+mmbTHjgVjqXOKHdUWpNoVxNkOt1/VLM7S4hCFaIy1g==',key_name='tempest-keypair-339101359',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3434d463800f4b268c2f67e9278a65ec',ramdisk_id='',reservation_id='r-g0vj2ge7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-67549223',owner_user_name='tempest-ServersTestJSON-67549223-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:56:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='964f63f357b7496c959106655fdc82c3',uuid=43481db0-816b-4096-a511-f46b9a3656d5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1110de1e-b008-47e8-9369-232fb9ff016e", "address": "fa:16:3e:42:b0:fe", "network": {"id": "aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61", "bridge": "br-int", "label": "tempest-ServersTestJSON-531033534-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3434d463800f4b268c2f67e9278a65ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1110de1e-b0", "ovs_interfaceid": "1110de1e-b008-47e8-9369-232fb9ff016e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.234 189512 DEBUG nova.network.os_vif_util [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Converting VIF {"id": "1110de1e-b008-47e8-9369-232fb9ff016e", "address": "fa:16:3e:42:b0:fe", "network": {"id": "aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61", "bridge": "br-int", "label": "tempest-ServersTestJSON-531033534-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3434d463800f4b268c2f67e9278a65ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1110de1e-b0", "ovs_interfaceid": "1110de1e-b008-47e8-9369-232fb9ff016e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.235 189512 DEBUG nova.network.os_vif_util [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:b0:fe,bridge_name='br-int',has_traffic_filtering=True,id=1110de1e-b008-47e8-9369-232fb9ff016e,network=Network(aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1110de1e-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.238 189512 DEBUG nova.objects.instance [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 43481db0-816b-4096-a511-f46b9a3656d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.257 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] End _get_guest_xml xml=<domain type="kvm">
Dec  1 22:56:22 compute-0 nova_compute[189508]:  <uuid>43481db0-816b-4096-a511-f46b9a3656d5</uuid>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  <name>instance-00000008</name>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  <memory>131072</memory>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  <vcpu>1</vcpu>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  <metadata>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <nova:name>tempest-ServersTestJSON-server-418498432</nova:name>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <nova:creationTime>2025-12-01 22:56:22</nova:creationTime>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <nova:flavor name="m1.nano">
Dec  1 22:56:22 compute-0 nova_compute[189508]:        <nova:memory>128</nova:memory>
Dec  1 22:56:22 compute-0 nova_compute[189508]:        <nova:disk>1</nova:disk>
Dec  1 22:56:22 compute-0 nova_compute[189508]:        <nova:swap>0</nova:swap>
Dec  1 22:56:22 compute-0 nova_compute[189508]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 22:56:22 compute-0 nova_compute[189508]:        <nova:vcpus>1</nova:vcpus>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      </nova:flavor>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <nova:owner>
Dec  1 22:56:22 compute-0 nova_compute[189508]:        <nova:user uuid="964f63f357b7496c959106655fdc82c3">tempest-ServersTestJSON-67549223-project-member</nova:user>
Dec  1 22:56:22 compute-0 nova_compute[189508]:        <nova:project uuid="3434d463800f4b268c2f67e9278a65ec">tempest-ServersTestJSON-67549223</nova:project>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      </nova:owner>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <nova:root type="image" uuid="74bb08bf-1799-4930-aad4-d505f26ff5f4"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <nova:ports>
Dec  1 22:56:22 compute-0 nova_compute[189508]:        <nova:port uuid="1110de1e-b008-47e8-9369-232fb9ff016e">
Dec  1 22:56:22 compute-0 nova_compute[189508]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:        </nova:port>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      </nova:ports>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    </nova:instance>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  </metadata>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  <sysinfo type="smbios">
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <system>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <entry name="manufacturer">RDO</entry>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <entry name="product">OpenStack Compute</entry>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <entry name="serial">43481db0-816b-4096-a511-f46b9a3656d5</entry>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <entry name="uuid">43481db0-816b-4096-a511-f46b9a3656d5</entry>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <entry name="family">Virtual Machine</entry>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    </system>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  </sysinfo>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  <os>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <boot dev="hd"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <smbios mode="sysinfo"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  </os>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  <features>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <acpi/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <apic/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <vmcoreinfo/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  </features>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  <clock offset="utc">
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <timer name="hpet" present="no"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  </clock>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  <cpu mode="host-model" match="exact">
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  </cpu>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  <devices>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5/disk"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <target dev="vda" bus="virtio"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <disk type="file" device="cdrom">
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5/disk.config"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <target dev="sda" bus="sata"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <interface type="ethernet">
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <mac address="fa:16:3e:42:b0:fe"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <mtu size="1442"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <target dev="tap1110de1e-b0"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    </interface>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <serial type="pty">
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <log file="/var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5/console.log" append="off"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    </serial>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <video>
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    </video>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <input type="tablet" bus="usb"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <rng model="virtio">
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <backend model="random">/dev/urandom</backend>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    </rng>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <controller type="usb" index="0"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    <memballoon model="virtio">
Dec  1 22:56:22 compute-0 nova_compute[189508]:      <stats period="10"/>
Dec  1 22:56:22 compute-0 nova_compute[189508]:    </memballoon>
Dec  1 22:56:22 compute-0 nova_compute[189508]:  </devices>
Dec  1 22:56:22 compute-0 nova_compute[189508]: </domain>
Dec  1 22:56:22 compute-0 nova_compute[189508]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.258 189512 DEBUG nova.compute.manager [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Preparing to wait for external event network-vif-plugged-1110de1e-b008-47e8-9369-232fb9ff016e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.258 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Acquiring lock "43481db0-816b-4096-a511-f46b9a3656d5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.259 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "43481db0-816b-4096-a511-f46b9a3656d5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.259 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "43481db0-816b-4096-a511-f46b9a3656d5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.261 189512 DEBUG nova.virt.libvirt.vif [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:56:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-418498432',display_name='tempest-ServersTestJSON-server-418498432',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-418498432',id=8,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9P9QfiFlABUbJCxtNsA3nKG9t/u23F/v0ft5XMrq92TJJgEwvo4o7JwrV4wU4r8VjtRsHt4jaGWcl4QFWwrZ6+mmbTHjgVjqXOKHdUWpNoVxNkOt1/VLM7S4hCFaIy1g==',key_name='tempest-keypair-339101359',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3434d463800f4b268c2f67e9278a65ec',ramdisk_id='',reservation_id='r-g0vj2ge7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-67549223',owner_user_name='tempest-ServersTestJSON-67549223-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:56:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='964f63f357b7496c959106655fdc82c3',uuid=43481db0-816b-4096-a511-f46b9a3656d5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1110de1e-b008-47e8-9369-232fb9ff016e", "address": "fa:16:3e:42:b0:fe", "network": {"id": "aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61", "bridge": "br-int", "label": "tempest-ServersTestJSON-531033534-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3434d463800f4b268c2f67e9278a65ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1110de1e-b0", "ovs_interfaceid": "1110de1e-b008-47e8-9369-232fb9ff016e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.261 189512 DEBUG nova.network.os_vif_util [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Converting VIF {"id": "1110de1e-b008-47e8-9369-232fb9ff016e", "address": "fa:16:3e:42:b0:fe", "network": {"id": "aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61", "bridge": "br-int", "label": "tempest-ServersTestJSON-531033534-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3434d463800f4b268c2f67e9278a65ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1110de1e-b0", "ovs_interfaceid": "1110de1e-b008-47e8-9369-232fb9ff016e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.262 189512 DEBUG nova.network.os_vif_util [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:b0:fe,bridge_name='br-int',has_traffic_filtering=True,id=1110de1e-b008-47e8-9369-232fb9ff016e,network=Network(aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1110de1e-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.263 189512 DEBUG os_vif [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:b0:fe,bridge_name='br-int',has_traffic_filtering=True,id=1110de1e-b008-47e8-9369-232fb9ff016e,network=Network(aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1110de1e-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.264 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.264 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.265 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.271 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.272 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1110de1e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.273 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1110de1e-b0, col_values=(('external_ids', {'iface-id': '1110de1e-b008-47e8-9369-232fb9ff016e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:b0:fe', 'vm-uuid': '43481db0-816b-4096-a511-f46b9a3656d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:56:22 compute-0 NetworkManager[56278]: <info>  [1764629782.2780] manager: (tap1110de1e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.278 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.291 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.292 189512 INFO os_vif [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:b0:fe,bridge_name='br-int',has_traffic_filtering=True,id=1110de1e-b008-47e8-9369-232fb9ff016e,network=Network(aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1110de1e-b0')#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.367 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.368 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.368 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] No VIF found with MAC fa:16:3e:42:b0:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  1 22:56:22 compute-0 nova_compute[189508]: 2025-12-01 22:56:22.369 189512 INFO nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Using config drive
Dec  1 22:56:23 compute-0 nova_compute[189508]: 2025-12-01 22:56:23.076 189512 DEBUG nova.compute.manager [req-30e154d9-3eb0-41e6-b0b0-f9f5ab653958 req-a047259d-3bed-4d89-b58c-ac8e902b0018 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Received event network-changed-1110de1e-b008-47e8-9369-232fb9ff016e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 22:56:23 compute-0 nova_compute[189508]: 2025-12-01 22:56:23.077 189512 DEBUG nova.compute.manager [req-30e154d9-3eb0-41e6-b0b0-f9f5ab653958 req-a047259d-3bed-4d89-b58c-ac8e902b0018 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Refreshing instance network info cache due to event network-changed-1110de1e-b008-47e8-9369-232fb9ff016e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  1 22:56:23 compute-0 nova_compute[189508]: 2025-12-01 22:56:23.077 189512 DEBUG oslo_concurrency.lockutils [req-30e154d9-3eb0-41e6-b0b0-f9f5ab653958 req-a047259d-3bed-4d89-b58c-ac8e902b0018 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-43481db0-816b-4096-a511-f46b9a3656d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 22:56:23 compute-0 nova_compute[189508]: 2025-12-01 22:56:23.077 189512 DEBUG oslo_concurrency.lockutils [req-30e154d9-3eb0-41e6-b0b0-f9f5ab653958 req-a047259d-3bed-4d89-b58c-ac8e902b0018 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-43481db0-816b-4096-a511-f46b9a3656d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 22:56:23 compute-0 nova_compute[189508]: 2025-12-01 22:56:23.078 189512 DEBUG nova.network.neutron [req-30e154d9-3eb0-41e6-b0b0-f9f5ab653958 req-a047259d-3bed-4d89-b58c-ac8e902b0018 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Refreshing network info cache for port 1110de1e-b008-47e8-9369-232fb9ff016e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  1 22:56:23 compute-0 nova_compute[189508]: 2025-12-01 22:56:23.929 189512 INFO nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Creating config drive at /var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5/disk.config
Dec  1 22:56:23 compute-0 nova_compute[189508]: 2025-12-01 22:56:23.935 189512 DEBUG oslo_concurrency.processutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyqnzu0sd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 22:56:24 compute-0 nova_compute[189508]: 2025-12-01 22:56:24.092 189512 DEBUG oslo_concurrency.processutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyqnzu0sd" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 22:56:24 compute-0 NetworkManager[56278]: <info>  [1764629784.1892] manager: (tap1110de1e-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Dec  1 22:56:24 compute-0 kernel: tap1110de1e-b0: entered promiscuous mode
Dec  1 22:56:24 compute-0 ovn_controller[97770]: 2025-12-01T22:56:24Z|00083|binding|INFO|Claiming lport 1110de1e-b008-47e8-9369-232fb9ff016e for this chassis.
Dec  1 22:56:24 compute-0 ovn_controller[97770]: 2025-12-01T22:56:24Z|00084|binding|INFO|1110de1e-b008-47e8-9369-232fb9ff016e: Claiming fa:16:3e:42:b0:fe 10.100.0.13
Dec  1 22:56:24 compute-0 nova_compute[189508]: 2025-12-01 22:56:24.198 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.210 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:b0:fe 10.100.0.13'], port_security=['fa:16:3e:42:b0:fe 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '43481db0-816b-4096-a511-f46b9a3656d5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3434d463800f4b268c2f67e9278a65ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': '13eda314-ebb1-4d0d-a547-715edfa9ba33', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c1c9c00-28a9-4b27-bfde-47f8dad59a71, chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=1110de1e-b008-47e8-9369-232fb9ff016e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.212 106662 INFO neutron.agent.ovn.metadata.agent [-] Port 1110de1e-b008-47e8-9369-232fb9ff016e in datapath aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61 bound to our chassis
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.213 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.230 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[ea73bb71-bf19-488e-9e41-e3096bfb42d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.232 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa9d98c6-f1 in ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.234 239973 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa9d98c6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.234 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[10fbd16f-8cf6-48d5-bfcf-46e890d2de01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.235 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[35bb9eba-6f4b-44ee-bab2-b6cffc2030c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:24 compute-0 systemd-udevd[251542]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.259 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[185cd18f-ce91-496d-9dba-c29acc990cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:24 compute-0 NetworkManager[56278]: <info>  [1764629784.2698] device (tap1110de1e-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 22:56:24 compute-0 NetworkManager[56278]: <info>  [1764629784.2714] device (tap1110de1e-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 22:56:24 compute-0 systemd-machined[155759]: New machine qemu-8-instance-00000008.
Dec  1 22:56:24 compute-0 nova_compute[189508]: 2025-12-01 22:56:24.283 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:24 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Dec  1 22:56:24 compute-0 ovn_controller[97770]: 2025-12-01T22:56:24Z|00085|binding|INFO|Setting lport 1110de1e-b008-47e8-9369-232fb9ff016e ovn-installed in OVS
Dec  1 22:56:24 compute-0 ovn_controller[97770]: 2025-12-01T22:56:24Z|00086|binding|INFO|Setting lport 1110de1e-b008-47e8-9369-232fb9ff016e up in Southbound
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.297 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[41ec175e-6f2d-4d30-abc6-83ee420c0e8d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:24 compute-0 nova_compute[189508]: 2025-12-01 22:56:24.300 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.338 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[192fc1e1-5a18-41e0-930d-81193fdd5f35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.353 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[a323df3a-4931-4b04-b589-81f0cbc8615f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:24 compute-0 NetworkManager[56278]: <info>  [1764629784.3546] manager: (tapaa9d98c6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Dec  1 22:56:24 compute-0 systemd-udevd[251548]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.397 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[429d4f31-429a-49b2-b10c-055bdf0ee51e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.401 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[b531536f-ac54-44eb-9c90-584d363cb04a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:24 compute-0 NetworkManager[56278]: <info>  [1764629784.4307] device (tapaa9d98c6-f0): carrier: link connected
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.442 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[cc35b8cd-d384-4d3f-8ffb-e78e21fe4ab1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.464 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[371e6aaf-27f7-4ea8-ba05-aed2e538b49c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa9d98c6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:8d:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529399, 'reachable_time': 35651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251578, 'error': None, 'target': 'ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.480 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[87138405-eb01-4c83-babf-3963fb19de38]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:8d21'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529399, 'tstamp': 529399}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251579, 'error': None, 'target': 'ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.500 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[26026f4e-e26f-4067-baf9-0d5bd61da294]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa9d98c6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:8d:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529399, 'reachable_time': 35651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251580, 'error': None, 'target': 'ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.533 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[e68d9b18-e768-49f8-9ce1-5d14e2f60015]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.620 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[d063457c-c918-49ec-abc1-bda454c373dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.623 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa9d98c6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.623 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.624 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa9d98c6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 22:56:24 compute-0 NetworkManager[56278]: <info>  [1764629784.6288] manager: (tapaa9d98c6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Dec  1 22:56:24 compute-0 kernel: tapaa9d98c6-f0: entered promiscuous mode
Dec  1 22:56:24 compute-0 nova_compute[189508]: 2025-12-01 22:56:24.627 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.633 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa9d98c6-f0, col_values=(('external_ids', {'iface-id': '119998af-5b5b-4819-9932-35933a81ae58'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 22:56:24 compute-0 ovn_controller[97770]: 2025-12-01T22:56:24Z|00087|binding|INFO|Releasing lport 119998af-5b5b-4819-9932-35933a81ae58 from this chassis (sb_readonly=0)
Dec  1 22:56:24 compute-0 nova_compute[189508]: 2025-12-01 22:56:24.636 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:24 compute-0 nova_compute[189508]: 2025-12-01 22:56:24.656 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.658 106662 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.659 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[b188565a-4e52-4414-9d78-75fe081aa6e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.661 106662 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: global
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    log         /dev/log local0 debug
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    log-tag     haproxy-metadata-proxy-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    user        root
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    group       root
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    maxconn     1024
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    pidfile     /var/lib/neutron/external/pids/aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61.pid.haproxy
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    daemon
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: defaults
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    log global
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    mode http
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    option httplog
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    option dontlognull
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    option http-server-close
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    option forwardfor
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    retries                 3
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    timeout http-request    30s
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    timeout connect         30s
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    timeout client          32s
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    timeout server          32s
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    timeout http-keep-alive 30s
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: listen listener
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    bind 169.254.169.254:80
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]:    http-request add-header X-OVN-Network-ID aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec  1 22:56:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:24.663 106662 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61', 'env', 'PROCESS_TAG=haproxy-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec  1 22:56:24 compute-0 nova_compute[189508]: 2025-12-01 22:56:24.687 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629784.6871037, 43481db0-816b-4096-a511-f46b9a3656d5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 22:56:24 compute-0 nova_compute[189508]: 2025-12-01 22:56:24.688 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] VM Started (Lifecycle Event)
Dec  1 22:56:24 compute-0 nova_compute[189508]: 2025-12-01 22:56:24.710 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 22:56:24 compute-0 nova_compute[189508]: 2025-12-01 22:56:24.722 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629784.6872454, 43481db0-816b-4096-a511-f46b9a3656d5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 22:56:24 compute-0 nova_compute[189508]: 2025-12-01 22:56:24.723 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] VM Paused (Lifecycle Event)
Dec  1 22:56:24 compute-0 nova_compute[189508]: 2025-12-01 22:56:24.745 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 22:56:24 compute-0 nova_compute[189508]: 2025-12-01 22:56:24.751 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 22:56:24 compute-0 nova_compute[189508]: 2025-12-01 22:56:24.768 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  1 22:56:25 compute-0 podman[251617]: 2025-12-01 22:56:25.151555832 +0000 UTC m=+0.105599356 container create 23617eb27811c31d8bfc343b7237ee5e5dc6cb98e86cac50b0fe48750b7757f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 22:56:25 compute-0 podman[251617]: 2025-12-01 22:56:25.101976506 +0000 UTC m=+0.056020120 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 22:56:25 compute-0 systemd[1]: Started libpod-conmon-23617eb27811c31d8bfc343b7237ee5e5dc6cb98e86cac50b0fe48750b7757f8.scope.
Dec  1 22:56:25 compute-0 systemd[1]: Started libcrun container.
Dec  1 22:56:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81382dc32ead8164bceb82620dc2102fe461242c69cd92ea3b5458c84e79641c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 22:56:25 compute-0 podman[251617]: 2025-12-01 22:56:25.25058845 +0000 UTC m=+0.204631974 container init 23617eb27811c31d8bfc343b7237ee5e5dc6cb98e86cac50b0fe48750b7757f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:56:25 compute-0 podman[251617]: 2025-12-01 22:56:25.261475769 +0000 UTC m=+0.215519293 container start 23617eb27811c31d8bfc343b7237ee5e5dc6cb98e86cac50b0fe48750b7757f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  1 22:56:25 compute-0 neutron-haproxy-ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61[251631]: [NOTICE]   (251635) : New worker (251637) forked
Dec  1 22:56:25 compute-0 neutron-haproxy-ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61[251631]: [NOTICE]   (251635) : Loading success.
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.479 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.485 189512 DEBUG nova.compute.manager [req-ea60065e-599e-410c-b34a-f0b77cbf9ede req-7594c50b-d83f-490f-a4b3-c0d98a0f3cd4 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Received event network-vif-plugged-1110de1e-b008-47e8-9369-232fb9ff016e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.485 189512 DEBUG oslo_concurrency.lockutils [req-ea60065e-599e-410c-b34a-f0b77cbf9ede req-7594c50b-d83f-490f-a4b3-c0d98a0f3cd4 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "43481db0-816b-4096-a511-f46b9a3656d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.485 189512 DEBUG oslo_concurrency.lockutils [req-ea60065e-599e-410c-b34a-f0b77cbf9ede req-7594c50b-d83f-490f-a4b3-c0d98a0f3cd4 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "43481db0-816b-4096-a511-f46b9a3656d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.486 189512 DEBUG oslo_concurrency.lockutils [req-ea60065e-599e-410c-b34a-f0b77cbf9ede req-7594c50b-d83f-490f-a4b3-c0d98a0f3cd4 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "43481db0-816b-4096-a511-f46b9a3656d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.486 189512 DEBUG nova.compute.manager [req-ea60065e-599e-410c-b34a-f0b77cbf9ede req-7594c50b-d83f-490f-a4b3-c0d98a0f3cd4 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Processing event network-vif-plugged-1110de1e-b008-47e8-9369-232fb9ff016e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.486 189512 DEBUG nova.compute.manager [req-ea60065e-599e-410c-b34a-f0b77cbf9ede req-7594c50b-d83f-490f-a4b3-c0d98a0f3cd4 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Received event network-vif-plugged-1110de1e-b008-47e8-9369-232fb9ff016e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.486 189512 DEBUG oslo_concurrency.lockutils [req-ea60065e-599e-410c-b34a-f0b77cbf9ede req-7594c50b-d83f-490f-a4b3-c0d98a0f3cd4 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "43481db0-816b-4096-a511-f46b9a3656d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.487 189512 DEBUG oslo_concurrency.lockutils [req-ea60065e-599e-410c-b34a-f0b77cbf9ede req-7594c50b-d83f-490f-a4b3-c0d98a0f3cd4 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "43481db0-816b-4096-a511-f46b9a3656d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.487 189512 DEBUG oslo_concurrency.lockutils [req-ea60065e-599e-410c-b34a-f0b77cbf9ede req-7594c50b-d83f-490f-a4b3-c0d98a0f3cd4 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "43481db0-816b-4096-a511-f46b9a3656d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.487 189512 DEBUG nova.compute.manager [req-ea60065e-599e-410c-b34a-f0b77cbf9ede req-7594c50b-d83f-490f-a4b3-c0d98a0f3cd4 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] No waiting events found dispatching network-vif-plugged-1110de1e-b008-47e8-9369-232fb9ff016e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.487 189512 WARNING nova.compute.manager [req-ea60065e-599e-410c-b34a-f0b77cbf9ede req-7594c50b-d83f-490f-a4b3-c0d98a0f3cd4 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Received unexpected event network-vif-plugged-1110de1e-b008-47e8-9369-232fb9ff016e for instance with vm_state building and task_state spawning.#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.488 189512 DEBUG nova.compute.manager [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.493 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629785.4931595, 43481db0-816b-4096-a511-f46b9a3656d5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.494 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] VM Resumed (Lifecycle Event)#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.496 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.502 189512 INFO nova.virt.libvirt.driver [-] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Instance spawned successfully.#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.502 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.550 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.557 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.558 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.558 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.559 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.559 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.560 189512 DEBUG nova.virt.libvirt.driver [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.565 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.616 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.645 189512 INFO nova.compute.manager [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Took 8.10 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.646 189512 DEBUG nova.compute.manager [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.716 189512 INFO nova.compute.manager [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Took 8.69 seconds to build instance.#033[00m
Dec  1 22:56:25 compute-0 nova_compute[189508]: 2025-12-01 22:56:25.748 189512 DEBUG oslo_concurrency.lockutils [None req-2bbac5a2-5a35-4270-9a87-07ab285e5be6 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "43481db0-816b-4096-a511-f46b9a3656d5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:27 compute-0 nova_compute[189508]: 2025-12-01 22:56:27.169 189512 DEBUG nova.network.neutron [req-30e154d9-3eb0-41e6-b0b0-f9f5ab653958 req-a047259d-3bed-4d89-b58c-ac8e902b0018 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Updated VIF entry in instance network info cache for port 1110de1e-b008-47e8-9369-232fb9ff016e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:56:27 compute-0 nova_compute[189508]: 2025-12-01 22:56:27.171 189512 DEBUG nova.network.neutron [req-30e154d9-3eb0-41e6-b0b0-f9f5ab653958 req-a047259d-3bed-4d89-b58c-ac8e902b0018 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Updating instance_info_cache with network_info: [{"id": "1110de1e-b008-47e8-9369-232fb9ff016e", "address": "fa:16:3e:42:b0:fe", "network": {"id": "aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61", "bridge": "br-int", "label": "tempest-ServersTestJSON-531033534-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3434d463800f4b268c2f67e9278a65ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1110de1e-b0", "ovs_interfaceid": "1110de1e-b008-47e8-9369-232fb9ff016e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:56:27 compute-0 nova_compute[189508]: 2025-12-01 22:56:27.194 189512 DEBUG oslo_concurrency.lockutils [req-30e154d9-3eb0-41e6-b0b0-f9f5ab653958 req-a047259d-3bed-4d89-b58c-ac8e902b0018 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-43481db0-816b-4096-a511-f46b9a3656d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:56:27 compute-0 nova_compute[189508]: 2025-12-01 22:56:27.276 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:27 compute-0 podman[251646]: 2025-12-01 22:56:27.801362578 +0000 UTC m=+0.083239422 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:56:28 compute-0 nova_compute[189508]: 2025-12-01 22:56:28.738 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:28 compute-0 NetworkManager[56278]: <info>  [1764629788.7610] manager: (patch-provnet-2ca1b2ba-ced0-4d3b-a498-99d4e11f374a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Dec  1 22:56:28 compute-0 NetworkManager[56278]: <info>  [1764629788.7631] manager: (patch-br-int-to-provnet-2ca1b2ba-ced0-4d3b-a498-99d4e11f374a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Dec  1 22:56:28 compute-0 nova_compute[189508]: 2025-12-01 22:56:28.848 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:28 compute-0 ovn_controller[97770]: 2025-12-01T22:56:28Z|00088|binding|INFO|Releasing lport 0bac805e-79cd-4ef5-a08c-830fa9d99912 from this chassis (sb_readonly=0)
Dec  1 22:56:28 compute-0 ovn_controller[97770]: 2025-12-01T22:56:28Z|00089|binding|INFO|Releasing lport 119998af-5b5b-4819-9932-35933a81ae58 from this chassis (sb_readonly=0)
Dec  1 22:56:28 compute-0 nova_compute[189508]: 2025-12-01 22:56:28.859 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:29 compute-0 nova_compute[189508]: 2025-12-01 22:56:29.384 189512 DEBUG nova.compute.manager [req-6baccdcb-e364-4cca-a2e5-9d71b38bc4ce req-fa3f7fe0-e4df-4255-866b-dfc319616a2a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Received event network-changed-1110de1e-b008-47e8-9369-232fb9ff016e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:56:29 compute-0 nova_compute[189508]: 2025-12-01 22:56:29.384 189512 DEBUG nova.compute.manager [req-6baccdcb-e364-4cca-a2e5-9d71b38bc4ce req-fa3f7fe0-e4df-4255-866b-dfc319616a2a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Refreshing instance network info cache due to event network-changed-1110de1e-b008-47e8-9369-232fb9ff016e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:56:29 compute-0 nova_compute[189508]: 2025-12-01 22:56:29.385 189512 DEBUG oslo_concurrency.lockutils [req-6baccdcb-e364-4cca-a2e5-9d71b38bc4ce req-fa3f7fe0-e4df-4255-866b-dfc319616a2a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-43481db0-816b-4096-a511-f46b9a3656d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:56:29 compute-0 nova_compute[189508]: 2025-12-01 22:56:29.385 189512 DEBUG oslo_concurrency.lockutils [req-6baccdcb-e364-4cca-a2e5-9d71b38bc4ce req-fa3f7fe0-e4df-4255-866b-dfc319616a2a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-43481db0-816b-4096-a511-f46b9a3656d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:56:29 compute-0 nova_compute[189508]: 2025-12-01 22:56:29.386 189512 DEBUG nova.network.neutron [req-6baccdcb-e364-4cca-a2e5-9d71b38bc4ce req-fa3f7fe0-e4df-4255-866b-dfc319616a2a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Refreshing network info cache for port 1110de1e-b008-47e8-9369-232fb9ff016e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:56:29 compute-0 podman[203693]: time="2025-12-01T22:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:56:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30756 "" "Go-http-client/1.1"
Dec  1 22:56:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5257 "" "Go-http-client/1.1"
Dec  1 22:56:29 compute-0 podman[251673]: 2025-12-01 22:56:29.848270827 +0000 UTC m=+0.132762676 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 22:56:30 compute-0 nova_compute[189508]: 2025-12-01 22:56:30.483 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:30 compute-0 nova_compute[189508]: 2025-12-01 22:56:30.924 189512 DEBUG oslo_concurrency.lockutils [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Acquiring lock "43481db0-816b-4096-a511-f46b9a3656d5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:30 compute-0 nova_compute[189508]: 2025-12-01 22:56:30.925 189512 DEBUG oslo_concurrency.lockutils [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "43481db0-816b-4096-a511-f46b9a3656d5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:30 compute-0 nova_compute[189508]: 2025-12-01 22:56:30.926 189512 DEBUG oslo_concurrency.lockutils [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Acquiring lock "43481db0-816b-4096-a511-f46b9a3656d5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:30 compute-0 nova_compute[189508]: 2025-12-01 22:56:30.926 189512 DEBUG oslo_concurrency.lockutils [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "43481db0-816b-4096-a511-f46b9a3656d5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:30 compute-0 nova_compute[189508]: 2025-12-01 22:56:30.927 189512 DEBUG oslo_concurrency.lockutils [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "43481db0-816b-4096-a511-f46b9a3656d5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:30 compute-0 nova_compute[189508]: 2025-12-01 22:56:30.928 189512 INFO nova.compute.manager [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Terminating instance#033[00m
Dec  1 22:56:30 compute-0 nova_compute[189508]: 2025-12-01 22:56:30.930 189512 DEBUG nova.compute.manager [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 22:56:30 compute-0 kernel: tap1110de1e-b0 (unregistering): left promiscuous mode
Dec  1 22:56:30 compute-0 NetworkManager[56278]: <info>  [1764629790.9588] device (tap1110de1e-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 22:56:30 compute-0 nova_compute[189508]: 2025-12-01 22:56:30.973 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:30 compute-0 ovn_controller[97770]: 2025-12-01T22:56:30Z|00090|binding|INFO|Releasing lport 1110de1e-b008-47e8-9369-232fb9ff016e from this chassis (sb_readonly=0)
Dec  1 22:56:30 compute-0 ovn_controller[97770]: 2025-12-01T22:56:30Z|00091|binding|INFO|Setting lport 1110de1e-b008-47e8-9369-232fb9ff016e down in Southbound
Dec  1 22:56:30 compute-0 ovn_controller[97770]: 2025-12-01T22:56:30Z|00092|binding|INFO|Removing iface tap1110de1e-b0 ovn-installed in OVS
Dec  1 22:56:30 compute-0 nova_compute[189508]: 2025-12-01 22:56:30.994 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:30 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:30.992 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:b0:fe 10.100.0.13'], port_security=['fa:16:3e:42:b0:fe 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '43481db0-816b-4096-a511-f46b9a3656d5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3434d463800f4b268c2f67e9278a65ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': '13eda314-ebb1-4d0d-a547-715edfa9ba33', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c1c9c00-28a9-4b27-bfde-47f8dad59a71, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=1110de1e-b008-47e8-9369-232fb9ff016e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:56:30 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:30.994 106662 INFO neutron.agent.ovn.metadata.agent [-] Port 1110de1e-b008-47e8-9369-232fb9ff016e in datapath aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61 unbound from our chassis#033[00m
Dec  1 22:56:30 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:30.995 106662 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 22:56:31 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:30.998 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[da1a1ccc-86a0-4b27-820d-e7aea75debbe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:31 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:30.999 106662 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61 namespace which is not needed anymore#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.016 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:31 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec  1 22:56:31 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 6.161s CPU time.
Dec  1 22:56:31 compute-0 systemd-machined[155759]: Machine qemu-8-instance-00000008 terminated.
Dec  1 22:56:31 compute-0 podman[251693]: 2025-12-01 22:56:31.116726604 +0000 UTC m=+0.133530187 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.215 189512 INFO nova.virt.libvirt.driver [-] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Instance destroyed successfully.#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.216 189512 DEBUG nova.objects.instance [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lazy-loading 'resources' on Instance uuid 43481db0-816b-4096-a511-f46b9a3656d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:56:31 compute-0 neutron-haproxy-ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61[251631]: [NOTICE]   (251635) : haproxy version is 2.8.14-c23fe91
Dec  1 22:56:31 compute-0 neutron-haproxy-ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61[251631]: [NOTICE]   (251635) : path to executable is /usr/sbin/haproxy
Dec  1 22:56:31 compute-0 neutron-haproxy-ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61[251631]: [WARNING]  (251635) : Exiting Master process...
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.232 189512 DEBUG nova.virt.libvirt.vif [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:56:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-418498432',display_name='tempest-ServersTestJSON-server-418498432',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-418498432',id=8,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9P9QfiFlABUbJCxtNsA3nKG9t/u23F/v0ft5XMrq92TJJgEwvo4o7JwrV4wU4r8VjtRsHt4jaGWcl4QFWwrZ6+mmbTHjgVjqXOKHdUWpNoVxNkOt1/VLM7S4hCFaIy1g==',key_name='tempest-keypair-339101359',keypairs=<?>,launch_index=0,launched_at=2025-12-01T22:56:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3434d463800f4b268c2f67e9278a65ec',ramdisk_id='',reservation_id='r-g0vj2ge7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-67549223',owner_user_name='tempest-ServersTestJSON-67549223-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T22:56:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='964f63f357b7496c959106655fdc82c3',uuid=43481db0-816b-4096-a511-f46b9a3656d5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1110de1e-b008-47e8-9369-232fb9ff016e", "address": "fa:16:3e:42:b0:fe", "network": {"id": "aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61", "bridge": "br-int", "label": "tempest-ServersTestJSON-531033534-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3434d463800f4b268c2f67e9278a65ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1110de1e-b0", "ovs_interfaceid": "1110de1e-b008-47e8-9369-232fb9ff016e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.233 189512 DEBUG nova.network.os_vif_util [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Converting VIF {"id": "1110de1e-b008-47e8-9369-232fb9ff016e", "address": "fa:16:3e:42:b0:fe", "network": {"id": "aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61", "bridge": "br-int", "label": "tempest-ServersTestJSON-531033534-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3434d463800f4b268c2f67e9278a65ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1110de1e-b0", "ovs_interfaceid": "1110de1e-b008-47e8-9369-232fb9ff016e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.234 189512 DEBUG nova.network.os_vif_util [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:b0:fe,bridge_name='br-int',has_traffic_filtering=True,id=1110de1e-b008-47e8-9369-232fb9ff016e,network=Network(aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1110de1e-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.234 189512 DEBUG os_vif [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:b0:fe,bridge_name='br-int',has_traffic_filtering=True,id=1110de1e-b008-47e8-9369-232fb9ff016e,network=Network(aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1110de1e-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.236 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:31 compute-0 neutron-haproxy-ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61[251631]: [ALERT]    (251635) : Current worker (251637) exited with code 143 (Terminated)
Dec  1 22:56:31 compute-0 neutron-haproxy-ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61[251631]: [WARNING]  (251635) : All workers exited. Exiting... (0)
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.237 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1110de1e-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.239 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:31 compute-0 systemd[1]: libpod-23617eb27811c31d8bfc343b7237ee5e5dc6cb98e86cac50b0fe48750b7757f8.scope: Deactivated successfully.
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.243 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:56:31 compute-0 podman[251730]: 2025-12-01 22:56:31.246632658 +0000 UTC m=+0.091033793 container died 23617eb27811c31d8bfc343b7237ee5e5dc6cb98e86cac50b0fe48750b7757f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.250 189512 INFO os_vif [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:b0:fe,bridge_name='br-int',has_traffic_filtering=True,id=1110de1e-b008-47e8-9369-232fb9ff016e,network=Network(aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1110de1e-b0')#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.252 189512 INFO nova.virt.libvirt.driver [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Deleting instance files /var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5_del#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.253 189512 INFO nova.virt.libvirt.driver [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Deletion of /var/lib/nova/instances/43481db0-816b-4096-a511-f46b9a3656d5_del complete#033[00m
Dec  1 22:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23617eb27811c31d8bfc343b7237ee5e5dc6cb98e86cac50b0fe48750b7757f8-userdata-shm.mount: Deactivated successfully.
Dec  1 22:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-81382dc32ead8164bceb82620dc2102fe461242c69cd92ea3b5458c84e79641c-merged.mount: Deactivated successfully.
Dec  1 22:56:31 compute-0 podman[251730]: 2025-12-01 22:56:31.299766994 +0000 UTC m=+0.144168149 container cleanup 23617eb27811c31d8bfc343b7237ee5e5dc6cb98e86cac50b0fe48750b7757f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  1 22:56:31 compute-0 systemd[1]: libpod-conmon-23617eb27811c31d8bfc343b7237ee5e5dc6cb98e86cac50b0fe48750b7757f8.scope: Deactivated successfully.
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.333 189512 INFO nova.compute.manager [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Took 0.40 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.334 189512 DEBUG oslo.service.loopingcall [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.334 189512 DEBUG nova.compute.manager [-] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.335 189512 DEBUG nova.network.neutron [-] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 22:56:31 compute-0 openstack_network_exporter[205887]: ERROR   22:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:56:31 compute-0 openstack_network_exporter[205887]: ERROR   22:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:56:31 compute-0 openstack_network_exporter[205887]: ERROR   22:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:56:31 compute-0 openstack_network_exporter[205887]: ERROR   22:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:56:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:56:31 compute-0 openstack_network_exporter[205887]: ERROR   22:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:56:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:56:31 compute-0 podman[251774]: 2025-12-01 22:56:31.43751939 +0000 UTC m=+0.098146174 container remove 23617eb27811c31d8bfc343b7237ee5e5dc6cb98e86cac50b0fe48750b7757f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 22:56:31 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:31.448 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[42291c09-f015-4793-b226-f553561730ee]: (4, ('Mon Dec  1 10:56:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61 (23617eb27811c31d8bfc343b7237ee5e5dc6cb98e86cac50b0fe48750b7757f8)\n23617eb27811c31d8bfc343b7237ee5e5dc6cb98e86cac50b0fe48750b7757f8\nMon Dec  1 10:56:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61 (23617eb27811c31d8bfc343b7237ee5e5dc6cb98e86cac50b0fe48750b7757f8)\n23617eb27811c31d8bfc343b7237ee5e5dc6cb98e86cac50b0fe48750b7757f8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:31 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:31.452 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[5222f308-41d8-494b-baa1-f0161ea23fc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:31 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:31.454 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa9d98c6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.458 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:31 compute-0 kernel: tapaa9d98c6-f0: left promiscuous mode
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.460 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:31 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:31.464 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a13576-7716-44f7-a422-668ed1a012b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.479 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:31 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:31.494 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[422de19c-0b94-42dd-a413-fdfbb48c7b0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:31 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:31.496 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[551ac284-15b6-4d55-aa21-6cdff956ca99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:31 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:31.516 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[2372a590-703b-4e1f-be53-de4c50df1e10]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529390, 'reachable_time': 30317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251787, 'error': None, 'target': 'ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:31 compute-0 systemd[1]: run-netns-ovnmeta\x2daa9d98c6\x2dfb90\x2d4fd6\x2d9ee1\x2da94bbe92fb61.mount: Deactivated successfully.
Dec  1 22:56:31 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:31.523 106770 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 22:56:31 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:56:31.523 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[1d6c2e4c-d98f-4bc4-91ee-82b14240e5ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.525 189512 DEBUG nova.compute.manager [req-d62c0b2f-5aa7-491a-b258-2294c896d02a req-71da7420-0ed8-486f-8720-e69a770f59c9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Received event network-vif-unplugged-1110de1e-b008-47e8-9369-232fb9ff016e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.526 189512 DEBUG oslo_concurrency.lockutils [req-d62c0b2f-5aa7-491a-b258-2294c896d02a req-71da7420-0ed8-486f-8720-e69a770f59c9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "43481db0-816b-4096-a511-f46b9a3656d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.528 189512 DEBUG oslo_concurrency.lockutils [req-d62c0b2f-5aa7-491a-b258-2294c896d02a req-71da7420-0ed8-486f-8720-e69a770f59c9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "43481db0-816b-4096-a511-f46b9a3656d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.528 189512 DEBUG oslo_concurrency.lockutils [req-d62c0b2f-5aa7-491a-b258-2294c896d02a req-71da7420-0ed8-486f-8720-e69a770f59c9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "43481db0-816b-4096-a511-f46b9a3656d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.529 189512 DEBUG nova.compute.manager [req-d62c0b2f-5aa7-491a-b258-2294c896d02a req-71da7420-0ed8-486f-8720-e69a770f59c9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] No waiting events found dispatching network-vif-unplugged-1110de1e-b008-47e8-9369-232fb9ff016e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:56:31 compute-0 nova_compute[189508]: 2025-12-01 22:56:31.529 189512 DEBUG nova.compute.manager [req-d62c0b2f-5aa7-491a-b258-2294c896d02a req-71da7420-0ed8-486f-8720-e69a770f59c9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Received event network-vif-unplugged-1110de1e-b008-47e8-9369-232fb9ff016e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 22:56:32 compute-0 nova_compute[189508]: 2025-12-01 22:56:32.555 189512 DEBUG nova.network.neutron [req-6baccdcb-e364-4cca-a2e5-9d71b38bc4ce req-fa3f7fe0-e4df-4255-866b-dfc319616a2a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Updated VIF entry in instance network info cache for port 1110de1e-b008-47e8-9369-232fb9ff016e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  1 22:56:32 compute-0 nova_compute[189508]: 2025-12-01 22:56:32.556 189512 DEBUG nova.network.neutron [req-6baccdcb-e364-4cca-a2e5-9d71b38bc4ce req-fa3f7fe0-e4df-4255-866b-dfc319616a2a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Updating instance_info_cache with network_info: [{"id": "1110de1e-b008-47e8-9369-232fb9ff016e", "address": "fa:16:3e:42:b0:fe", "network": {"id": "aa9d98c6-fb90-4fd6-9ee1-a94bbe92fb61", "bridge": "br-int", "label": "tempest-ServersTestJSON-531033534-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3434d463800f4b268c2f67e9278a65ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1110de1e-b0", "ovs_interfaceid": "1110de1e-b008-47e8-9369-232fb9ff016e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 22:56:32 compute-0 nova_compute[189508]: 2025-12-01 22:56:32.589 189512 DEBUG oslo_concurrency.lockutils [req-6baccdcb-e364-4cca-a2e5-9d71b38bc4ce req-fa3f7fe0-e4df-4255-866b-dfc319616a2a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-43481db0-816b-4096-a511-f46b9a3656d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 22:56:33 compute-0 nova_compute[189508]: 2025-12-01 22:56:33.573 189512 DEBUG nova.network.neutron [-] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 22:56:33 compute-0 nova_compute[189508]: 2025-12-01 22:56:33.594 189512 INFO nova.compute.manager [-] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Took 2.26 seconds to deallocate network for instance.
Dec  1 22:56:33 compute-0 nova_compute[189508]: 2025-12-01 22:56:33.673 189512 DEBUG oslo_concurrency.lockutils [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:56:33 compute-0 nova_compute[189508]: 2025-12-01 22:56:33.674 189512 DEBUG oslo_concurrency.lockutils [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:56:33 compute-0 nova_compute[189508]: 2025-12-01 22:56:33.959 189512 DEBUG nova.compute.provider_tree [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 22:56:33 compute-0 nova_compute[189508]: 2025-12-01 22:56:33.980 189512 DEBUG nova.scheduler.client.report [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 22:56:34 compute-0 nova_compute[189508]: 2025-12-01 22:56:34.007 189512 DEBUG oslo_concurrency.lockutils [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:56:34 compute-0 nova_compute[189508]: 2025-12-01 22:56:34.021 189512 DEBUG nova.compute.manager [req-5b50280e-7b25-467a-88d1-94f426fafd73 req-b97facf0-69f1-42bb-959b-786aea3bb063 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Received event network-vif-plugged-1110de1e-b008-47e8-9369-232fb9ff016e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 22:56:34 compute-0 nova_compute[189508]: 2025-12-01 22:56:34.021 189512 DEBUG oslo_concurrency.lockutils [req-5b50280e-7b25-467a-88d1-94f426fafd73 req-b97facf0-69f1-42bb-959b-786aea3bb063 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "43481db0-816b-4096-a511-f46b9a3656d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:56:34 compute-0 nova_compute[189508]: 2025-12-01 22:56:34.021 189512 DEBUG oslo_concurrency.lockutils [req-5b50280e-7b25-467a-88d1-94f426fafd73 req-b97facf0-69f1-42bb-959b-786aea3bb063 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "43481db0-816b-4096-a511-f46b9a3656d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:56:34 compute-0 nova_compute[189508]: 2025-12-01 22:56:34.022 189512 DEBUG oslo_concurrency.lockutils [req-5b50280e-7b25-467a-88d1-94f426fafd73 req-b97facf0-69f1-42bb-959b-786aea3bb063 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "43481db0-816b-4096-a511-f46b9a3656d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:56:34 compute-0 nova_compute[189508]: 2025-12-01 22:56:34.022 189512 DEBUG nova.compute.manager [req-5b50280e-7b25-467a-88d1-94f426fafd73 req-b97facf0-69f1-42bb-959b-786aea3bb063 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] No waiting events found dispatching network-vif-plugged-1110de1e-b008-47e8-9369-232fb9ff016e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 22:56:34 compute-0 nova_compute[189508]: 2025-12-01 22:56:34.022 189512 WARNING nova.compute.manager [req-5b50280e-7b25-467a-88d1-94f426fafd73 req-b97facf0-69f1-42bb-959b-786aea3bb063 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Received unexpected event network-vif-plugged-1110de1e-b008-47e8-9369-232fb9ff016e for instance with vm_state deleted and task_state None.
Dec  1 22:56:34 compute-0 nova_compute[189508]: 2025-12-01 22:56:34.034 189512 INFO nova.scheduler.client.report [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Deleted allocations for instance 43481db0-816b-4096-a511-f46b9a3656d5
Dec  1 22:56:34 compute-0 nova_compute[189508]: 2025-12-01 22:56:34.110 189512 DEBUG nova.compute.manager [req-88165119-9313-48e0-9caa-76c348a14eae req-5696a57b-f405-4b8c-a1cc-31c6d5fa20f8 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Received event network-vif-deleted-1110de1e-b008-47e8-9369-232fb9ff016e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 22:56:34 compute-0 nova_compute[189508]: 2025-12-01 22:56:34.146 189512 DEBUG oslo_concurrency.lockutils [None req-1bbc48c1-60d5-40ef-852b-6ac1b53ff526 964f63f357b7496c959106655fdc82c3 3434d463800f4b268c2f67e9278a65ec - - default default] Lock "43481db0-816b-4096-a511-f46b9a3656d5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:56:34 compute-0 nova_compute[189508]: 2025-12-01 22:56:34.785 189512 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764629779.7828014, 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 22:56:34 compute-0 nova_compute[189508]: 2025-12-01 22:56:34.785 189512 INFO nova.compute.manager [-] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] VM Stopped (Lifecycle Event)
Dec  1 22:56:34 compute-0 nova_compute[189508]: 2025-12-01 22:56:34.907 189512 DEBUG nova.compute.manager [None req-be852806-f838-4a56-a1aa-14b84b9743d3 - - - - - -] [instance: 86e9d0e8-9c6e-4a21-82ba-ba202b14c2fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 22:56:35 compute-0 nova_compute[189508]: 2025-12-01 22:56:35.488 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.241 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.252 189512 DEBUG nova.compute.manager [req-79e12437-ed49-4c57-904a-45e91e254cee req-1bce98ca-7cc6-4f5f-9929-176f5da3fde2 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Received event network-vif-plugged-2c9e194a-9ee9-406f-8afb-aba53adbc9d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.253 189512 DEBUG oslo_concurrency.lockutils [req-79e12437-ed49-4c57-904a-45e91e254cee req-1bce98ca-7cc6-4f5f-9929-176f5da3fde2 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "691446f5-d3d8-4a4f-a161-f2058a04a59d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.254 189512 DEBUG oslo_concurrency.lockutils [req-79e12437-ed49-4c57-904a-45e91e254cee req-1bce98ca-7cc6-4f5f-9929-176f5da3fde2 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "691446f5-d3d8-4a4f-a161-f2058a04a59d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.255 189512 DEBUG oslo_concurrency.lockutils [req-79e12437-ed49-4c57-904a-45e91e254cee req-1bce98ca-7cc6-4f5f-9929-176f5da3fde2 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "691446f5-d3d8-4a4f-a161-f2058a04a59d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.255 189512 DEBUG nova.compute.manager [req-79e12437-ed49-4c57-904a-45e91e254cee req-1bce98ca-7cc6-4f5f-9929-176f5da3fde2 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Processing event network-vif-plugged-2c9e194a-9ee9-406f-8afb-aba53adbc9d7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.256 189512 DEBUG nova.compute.manager [req-79e12437-ed49-4c57-904a-45e91e254cee req-1bce98ca-7cc6-4f5f-9929-176f5da3fde2 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Received event network-vif-plugged-2c9e194a-9ee9-406f-8afb-aba53adbc9d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.257 189512 DEBUG oslo_concurrency.lockutils [req-79e12437-ed49-4c57-904a-45e91e254cee req-1bce98ca-7cc6-4f5f-9929-176f5da3fde2 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "691446f5-d3d8-4a4f-a161-f2058a04a59d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.258 189512 DEBUG oslo_concurrency.lockutils [req-79e12437-ed49-4c57-904a-45e91e254cee req-1bce98ca-7cc6-4f5f-9929-176f5da3fde2 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "691446f5-d3d8-4a4f-a161-f2058a04a59d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.259 189512 DEBUG oslo_concurrency.lockutils [req-79e12437-ed49-4c57-904a-45e91e254cee req-1bce98ca-7cc6-4f5f-9929-176f5da3fde2 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "691446f5-d3d8-4a4f-a161-f2058a04a59d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.259 189512 DEBUG nova.compute.manager [req-79e12437-ed49-4c57-904a-45e91e254cee req-1bce98ca-7cc6-4f5f-9929-176f5da3fde2 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] No waiting events found dispatching network-vif-plugged-2c9e194a-9ee9-406f-8afb-aba53adbc9d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.260 189512 WARNING nova.compute.manager [req-79e12437-ed49-4c57-904a-45e91e254cee req-1bce98ca-7cc6-4f5f-9929-176f5da3fde2 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Received unexpected event network-vif-plugged-2c9e194a-9ee9-406f-8afb-aba53adbc9d7 for instance with vm_state building and task_state spawning.
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.262 189512 DEBUG nova.compute.manager [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Instance event wait completed in 16 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.271 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629796.2708197, 691446f5-d3d8-4a4f-a161-f2058a04a59d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.272 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] VM Resumed (Lifecycle Event)
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.275 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.285 189512 INFO nova.virt.libvirt.driver [-] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Instance spawned successfully.
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.286 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.292 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.303 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.318 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.319 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.320 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.321 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.322 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.322 189512 DEBUG nova.virt.libvirt.driver [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.327 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.408 189512 INFO nova.compute.manager [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Took 22.78 seconds to spawn the instance on the hypervisor.
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.410 189512 DEBUG nova.compute.manager [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.480 189512 INFO nova.compute.manager [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Took 23.35 seconds to build instance.
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.507 189512 DEBUG oslo_concurrency.lockutils [None req-112efd7d-6fe4-4112-a61c-390ee429e63c 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "691446f5-d3d8-4a4f-a161-f2058a04a59d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:56:36 compute-0 nova_compute[189508]: 2025-12-01 22:56:36.851 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:36 compute-0 podman[251789]: 2025-12-01 22:56:36.872358353 +0000 UTC m=+0.132156448 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  1 22:56:36 compute-0 podman[251788]: 2025-12-01 22:56:36.921749314 +0000 UTC m=+0.185372738 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  1 22:56:40 compute-0 nova_compute[189508]: 2025-12-01 22:56:40.497 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:41 compute-0 nova_compute[189508]: 2025-12-01 22:56:41.245 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:56:41 compute-0 podman[251831]: 2025-12-01 22:56:41.809092484 +0000 UTC m=+0.075816971 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 22:56:41 compute-0 podman[251834]: 2025-12-01 22:56:41.818717127 +0000 UTC m=+0.074758671 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30, config_id=edpm, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc.)
Dec  1 22:56:41 compute-0 nova_compute[189508]: 2025-12-01 22:56:41.845 189512 DEBUG nova.compute.manager [req-0bb46ceb-27b9-436e-bd03-5727f8e5925d req-a31e6cf5-1fa1-46d3-8d8e-f10da955b60e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Received event network-changed-2c9e194a-9ee9-406f-8afb-aba53adbc9d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 22:56:41 compute-0 nova_compute[189508]: 2025-12-01 22:56:41.846 189512 DEBUG nova.compute.manager [req-0bb46ceb-27b9-436e-bd03-5727f8e5925d req-a31e6cf5-1fa1-46d3-8d8e-f10da955b60e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Refreshing instance network info cache due to event network-changed-2c9e194a-9ee9-406f-8afb-aba53adbc9d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  1 22:56:41 compute-0 nova_compute[189508]: 2025-12-01 22:56:41.847 189512 DEBUG oslo_concurrency.lockutils [req-0bb46ceb-27b9-436e-bd03-5727f8e5925d req-a31e6cf5-1fa1-46d3-8d8e-f10da955b60e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 22:56:41 compute-0 nova_compute[189508]: 2025-12-01 22:56:41.847 189512 DEBUG oslo_concurrency.lockutils [req-0bb46ceb-27b9-436e-bd03-5727f8e5925d req-a31e6cf5-1fa1-46d3-8d8e-f10da955b60e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 22:56:41 compute-0 nova_compute[189508]: 2025-12-01 22:56:41.848 189512 DEBUG nova.network.neutron [req-0bb46ceb-27b9-436e-bd03-5727f8e5925d req-a31e6cf5-1fa1-46d3-8d8e-f10da955b60e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Refreshing network info cache for port 2c9e194a-9ee9-406f-8afb-aba53adbc9d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  1 22:56:41 compute-0 podman[251832]: 2025-12-01 22:56:41.859699389 +0000 UTC m=+0.125462508 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 22:56:41 compute-0 podman[251833]: 2025-12-01 22:56:41.86078307 +0000 UTC m=+0.121664281 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Dec  1 22:56:42 compute-0 ovn_controller[97770]: 2025-12-01T22:56:42Z|00093|binding|INFO|Releasing lport 0bac805e-79cd-4ef5-a08c-830fa9d99912 from this chassis (sb_readonly=0)
Dec  1 22:56:42 compute-0 nova_compute[189508]: 2025-12-01 22:56:42.713 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:43 compute-0 nova_compute[189508]: 2025-12-01 22:56:43.332 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:44 compute-0 nova_compute[189508]: 2025-12-01 22:56:44.071 189512 DEBUG nova.network.neutron [req-0bb46ceb-27b9-436e-bd03-5727f8e5925d req-a31e6cf5-1fa1-46d3-8d8e-f10da955b60e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Updated VIF entry in instance network info cache for port 2c9e194a-9ee9-406f-8afb-aba53adbc9d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:56:44 compute-0 nova_compute[189508]: 2025-12-01 22:56:44.072 189512 DEBUG nova.network.neutron [req-0bb46ceb-27b9-436e-bd03-5727f8e5925d req-a31e6cf5-1fa1-46d3-8d8e-f10da955b60e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Updating instance_info_cache with network_info: [{"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:56:44 compute-0 nova_compute[189508]: 2025-12-01 22:56:44.091 189512 DEBUG oslo_concurrency.lockutils [req-0bb46ceb-27b9-436e-bd03-5727f8e5925d req-a31e6cf5-1fa1-46d3-8d8e-f10da955b60e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:56:44 compute-0 nova_compute[189508]: 2025-12-01 22:56:44.820 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:56:45 compute-0 nova_compute[189508]: 2025-12-01 22:56:45.194 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:56:45 compute-0 nova_compute[189508]: 2025-12-01 22:56:45.497 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:46 compute-0 nova_compute[189508]: 2025-12-01 22:56:46.208 189512 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764629791.2067914, 43481db0-816b-4096-a511-f46b9a3656d5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:56:46 compute-0 nova_compute[189508]: 2025-12-01 22:56:46.209 189512 INFO nova.compute.manager [-] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] VM Stopped (Lifecycle Event)#033[00m
Dec  1 22:56:46 compute-0 nova_compute[189508]: 2025-12-01 22:56:46.240 189512 DEBUG nova.compute.manager [None req-cd286544-d52a-45c2-8148-604436a206f9 - - - - - -] [instance: 43481db0-816b-4096-a511-f46b9a3656d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:56:46 compute-0 nova_compute[189508]: 2025-12-01 22:56:46.249 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:50 compute-0 nova_compute[189508]: 2025-12-01 22:56:50.501 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:51 compute-0 nova_compute[189508]: 2025-12-01 22:56:51.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:56:51 compute-0 nova_compute[189508]: 2025-12-01 22:56:51.252 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.485 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Acquiring lock "fbf5b185-cbf1-488e-991b-a561cf724f9a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.486 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "fbf5b185-cbf1-488e-991b-a561cf724f9a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.507 189512 DEBUG nova.compute.manager [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.602 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.603 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.615 189512 DEBUG nova.virt.hardware [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.616 189512 INFO nova.compute.claims [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.751 189512 DEBUG nova.compute.provider_tree [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.766 189512 DEBUG nova.scheduler.client.report [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.786 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.788 189512 DEBUG nova.compute.manager [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.842 189512 DEBUG nova.compute.manager [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.843 189512 DEBUG nova.network.neutron [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.865 189512 INFO nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.883 189512 DEBUG nova.compute.manager [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.985 189512 DEBUG nova.compute.manager [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.987 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.987 189512 INFO nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Creating image(s)#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.988 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Acquiring lock "/var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.989 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "/var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:53 compute-0 nova_compute[189508]: 2025-12-01 22:56:53.990 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "/var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.009 189512 DEBUG oslo_concurrency.processutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.097 189512 DEBUG oslo_concurrency.processutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.098 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Acquiring lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.099 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.116 189512 DEBUG oslo_concurrency.processutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.207 189512 DEBUG oslo_concurrency.processutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.208 189512 DEBUG oslo_concurrency.processutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270,backing_fmt=raw /var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.231 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.258 189512 DEBUG oslo_concurrency.processutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270,backing_fmt=raw /var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a/disk 1073741824" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.259 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.260 189512 DEBUG oslo_concurrency.processutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.344 189512 DEBUG oslo_concurrency.processutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.345 189512 DEBUG nova.virt.disk.api [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Checking if we can resize image /var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.346 189512 DEBUG oslo_concurrency.processutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.365 189512 DEBUG nova.policy [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4e2efc564e1a42b190b1eec7ab4437ec', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '30e98aa31d6d4f7fa1c36a1e13fde3e4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.406 189512 DEBUG oslo_concurrency.processutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.407 189512 DEBUG nova.virt.disk.api [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Cannot resize image /var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.408 189512 DEBUG nova.objects.instance [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lazy-loading 'migration_context' on Instance uuid fbf5b185-cbf1-488e-991b-a561cf724f9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.425 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.425 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Ensure instance console log exists: /var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.426 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.427 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.427 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.676 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.677 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.677 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:56:54 compute-0 nova_compute[189508]: 2025-12-01 22:56:54.678 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid 691446f5-d3d8-4a4f-a161-f2058a04a59d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:56:55 compute-0 nova_compute[189508]: 2025-12-01 22:56:55.504 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:56 compute-0 nova_compute[189508]: 2025-12-01 22:56:56.116 189512 DEBUG nova.network.neutron [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Successfully created port: f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 22:56:56 compute-0 nova_compute[189508]: 2025-12-01 22:56:56.257 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.561 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.666 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Updating instance_info_cache with network_info: [{"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.681 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.682 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.683 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.684 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.684 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.686 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.687 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.688 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.717 189512 DEBUG nova.network.neutron [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Successfully updated port: f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.830 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Acquiring lock "refresh_cache-fbf5b185-cbf1-488e-991b-a561cf724f9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.831 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Acquired lock "refresh_cache-fbf5b185-cbf1-488e-991b-a561cf724f9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.832 189512 DEBUG nova.network.neutron [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.836 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.837 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.838 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.839 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.911 189512 DEBUG nova.compute.manager [req-33dea83f-950d-49a9-8921-97b3825fd01e req-bd31abed-fac0-4b10-8e77-68efcbe51c5d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Received event network-changed-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.912 189512 DEBUG nova.compute.manager [req-33dea83f-950d-49a9-8921-97b3825fd01e req-bd31abed-fac0-4b10-8e77-68efcbe51c5d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Refreshing instance network info cache due to event network-changed-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.913 189512 DEBUG oslo_concurrency.lockutils [req-33dea83f-950d-49a9-8921-97b3825fd01e req-bd31abed-fac0-4b10-8e77-68efcbe51c5d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-fbf5b185-cbf1-488e-991b-a561cf724f9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:56:57 compute-0 nova_compute[189508]: 2025-12-01 22:56:57.946 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:58 compute-0 nova_compute[189508]: 2025-12-01 22:56:58.036 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:58 compute-0 nova_compute[189508]: 2025-12-01 22:56:58.038 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:56:58 compute-0 podman[251928]: 2025-12-01 22:56:58.040638296 +0000 UTC m=+0.114606600 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:56:58 compute-0 nova_compute[189508]: 2025-12-01 22:56:58.096 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:56:58 compute-0 nova_compute[189508]: 2025-12-01 22:56:58.125 189512 DEBUG nova.network.neutron [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 22:56:58 compute-0 nova_compute[189508]: 2025-12-01 22:56:58.431 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:56:58 compute-0 nova_compute[189508]: 2025-12-01 22:56:58.433 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5214MB free_disk=72.15806198120117GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:56:58 compute-0 nova_compute[189508]: 2025-12-01 22:56:58.434 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:56:58 compute-0 nova_compute[189508]: 2025-12-01 22:56:58.434 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:56:58 compute-0 nova_compute[189508]: 2025-12-01 22:56:58.526 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 691446f5-d3d8-4a4f-a161-f2058a04a59d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:56:58 compute-0 nova_compute[189508]: 2025-12-01 22:56:58.527 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance fbf5b185-cbf1-488e-991b-a561cf724f9a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:56:58 compute-0 nova_compute[189508]: 2025-12-01 22:56:58.528 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:56:58 compute-0 nova_compute[189508]: 2025-12-01 22:56:58.528 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:56:58 compute-0 nova_compute[189508]: 2025-12-01 22:56:58.612 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:56:58 compute-0 nova_compute[189508]: 2025-12-01 22:56:58.638 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:56:58 compute-0 nova_compute[189508]: 2025-12-01 22:56:58.670 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:56:58 compute-0 nova_compute[189508]: 2025-12-01 22:56:58.671 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:56:59 compute-0 podman[203693]: time="2025-12-01T22:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:56:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:56:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.283 189512 DEBUG nova.network.neutron [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Updating instance_info_cache with network_info: [{"id": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "address": "fa:16:3e:b0:37:a2", "network": {"id": "c6a7fa95-c3fa-44ca-b41e-76ef382cc755", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-328171273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30e98aa31d6d4f7fa1c36a1e13fde3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f45881-25", "ovs_interfaceid": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.323 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Releasing lock "refresh_cache-fbf5b185-cbf1-488e-991b-a561cf724f9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.324 189512 DEBUG nova.compute.manager [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Instance network_info: |[{"id": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "address": "fa:16:3e:b0:37:a2", "network": {"id": "c6a7fa95-c3fa-44ca-b41e-76ef382cc755", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-328171273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30e98aa31d6d4f7fa1c36a1e13fde3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f45881-25", "ovs_interfaceid": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.326 189512 DEBUG oslo_concurrency.lockutils [req-33dea83f-950d-49a9-8921-97b3825fd01e req-bd31abed-fac0-4b10-8e77-68efcbe51c5d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-fbf5b185-cbf1-488e-991b-a561cf724f9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.327 189512 DEBUG nova.network.neutron [req-33dea83f-950d-49a9-8921-97b3825fd01e req-bd31abed-fac0-4b10-8e77-68efcbe51c5d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Refreshing network info cache for port f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.333 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Start _get_guest_xml network_info=[{"id": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "address": "fa:16:3e:b0:37:a2", "network": {"id": "c6a7fa95-c3fa-44ca-b41e-76ef382cc755", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-328171273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30e98aa31d6d4f7fa1c36a1e13fde3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f45881-25", "ovs_interfaceid": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T22:55:21Z,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T22:55:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '74bb08bf-1799-4930-aad4-d505f26ff5f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.346 189512 WARNING nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.354 189512 DEBUG nova.virt.libvirt.host [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.356 189512 DEBUG nova.virt.libvirt.host [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.370 189512 DEBUG nova.virt.libvirt.host [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.371 189512 DEBUG nova.virt.libvirt.host [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.372 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.373 189512 DEBUG nova.virt.hardware [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T22:55:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e42a55e-71e2-4041-8ca2-725d63f058bf',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T22:55:21Z,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T22:55:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.375 189512 DEBUG nova.virt.hardware [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.376 189512 DEBUG nova.virt.hardware [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.377 189512 DEBUG nova.virt.hardware [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.379 189512 DEBUG nova.virt.hardware [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.380 189512 DEBUG nova.virt.hardware [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.381 189512 DEBUG nova.virt.hardware [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.382 189512 DEBUG nova.virt.hardware [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.383 189512 DEBUG nova.virt.hardware [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.384 189512 DEBUG nova.virt.hardware [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.385 189512 DEBUG nova.virt.hardware [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.392 189512 DEBUG nova.virt.libvirt.vif [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:56:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1900389938',display_name='tempest-ServersTestManualDisk-server-1900389938',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1900389938',id=9,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNaLsmJG4vwWiNZjgeJrhuIUw802zdKjN36N6c3UsBfD2P4qIGHprwkEBkYg3KUq5Todbt496njxwVABElCJehOn2hYdLkSz75xjbX0QZJdXSQ9Ulz9a7UPzI5PjxZdpHQ==',key_name='tempest-keypair-296854846',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30e98aa31d6d4f7fa1c36a1e13fde3e4',ramdisk_id='',reservation_id='r-hmdy6r9h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-22155516',owner_user_name='tempest-ServersTestManualDisk-22155516-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:56:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4e2efc564e1a42b190b1eec7ab4437ec',uuid=fbf5b185-cbf1-488e-991b-a561cf724f9a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "address": "fa:16:3e:b0:37:a2", "network": {"id": "c6a7fa95-c3fa-44ca-b41e-76ef382cc755", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-328171273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30e98aa31d6d4f7fa1c36a1e13fde3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f45881-25", "ovs_interfaceid": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.393 189512 DEBUG nova.network.os_vif_util [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Converting VIF {"id": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "address": "fa:16:3e:b0:37:a2", "network": {"id": "c6a7fa95-c3fa-44ca-b41e-76ef382cc755", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-328171273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30e98aa31d6d4f7fa1c36a1e13fde3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f45881-25", "ovs_interfaceid": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.395 189512 DEBUG nova.network.os_vif_util [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:37:a2,bridge_name='br-int',has_traffic_filtering=True,id=f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c,network=Network(c6a7fa95-c3fa-44ca-b41e-76ef382cc755),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5f45881-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.396 189512 DEBUG nova.objects.instance [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid fbf5b185-cbf1-488e-991b-a561cf724f9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.410 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] End _get_guest_xml xml=<domain type="kvm">
Dec  1 22:57:00 compute-0 nova_compute[189508]:  <uuid>fbf5b185-cbf1-488e-991b-a561cf724f9a</uuid>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  <name>instance-00000009</name>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  <memory>131072</memory>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  <vcpu>1</vcpu>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  <metadata>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <nova:name>tempest-ServersTestManualDisk-server-1900389938</nova:name>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <nova:creationTime>2025-12-01 22:57:00</nova:creationTime>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <nova:flavor name="m1.nano">
Dec  1 22:57:00 compute-0 nova_compute[189508]:        <nova:memory>128</nova:memory>
Dec  1 22:57:00 compute-0 nova_compute[189508]:        <nova:disk>1</nova:disk>
Dec  1 22:57:00 compute-0 nova_compute[189508]:        <nova:swap>0</nova:swap>
Dec  1 22:57:00 compute-0 nova_compute[189508]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 22:57:00 compute-0 nova_compute[189508]:        <nova:vcpus>1</nova:vcpus>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      </nova:flavor>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <nova:owner>
Dec  1 22:57:00 compute-0 nova_compute[189508]:        <nova:user uuid="4e2efc564e1a42b190b1eec7ab4437ec">tempest-ServersTestManualDisk-22155516-project-member</nova:user>
Dec  1 22:57:00 compute-0 nova_compute[189508]:        <nova:project uuid="30e98aa31d6d4f7fa1c36a1e13fde3e4">tempest-ServersTestManualDisk-22155516</nova:project>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      </nova:owner>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <nova:root type="image" uuid="74bb08bf-1799-4930-aad4-d505f26ff5f4"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <nova:ports>
Dec  1 22:57:00 compute-0 nova_compute[189508]:        <nova:port uuid="f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c">
Dec  1 22:57:00 compute-0 nova_compute[189508]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:        </nova:port>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      </nova:ports>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    </nova:instance>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  </metadata>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  <sysinfo type="smbios">
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <system>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <entry name="manufacturer">RDO</entry>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <entry name="product">OpenStack Compute</entry>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <entry name="serial">fbf5b185-cbf1-488e-991b-a561cf724f9a</entry>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <entry name="uuid">fbf5b185-cbf1-488e-991b-a561cf724f9a</entry>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <entry name="family">Virtual Machine</entry>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    </system>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  </sysinfo>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  <os>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <boot dev="hd"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <smbios mode="sysinfo"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  </os>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  <features>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <acpi/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <apic/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <vmcoreinfo/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  </features>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  <clock offset="utc">
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <timer name="hpet" present="no"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  </clock>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  <cpu mode="host-model" match="exact">
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  </cpu>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  <devices>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a/disk"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <target dev="vda" bus="virtio"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <disk type="file" device="cdrom">
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a/disk.config"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <target dev="sda" bus="sata"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <interface type="ethernet">
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <mac address="fa:16:3e:b0:37:a2"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <mtu size="1442"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <target dev="tapf5f45881-25"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    </interface>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <serial type="pty">
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <log file="/var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a/console.log" append="off"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    </serial>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <video>
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    </video>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <input type="tablet" bus="usb"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <rng model="virtio">
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <backend model="random">/dev/urandom</backend>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    </rng>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <controller type="usb" index="0"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    <memballoon model="virtio">
Dec  1 22:57:00 compute-0 nova_compute[189508]:      <stats period="10"/>
Dec  1 22:57:00 compute-0 nova_compute[189508]:    </memballoon>
Dec  1 22:57:00 compute-0 nova_compute[189508]:  </devices>
Dec  1 22:57:00 compute-0 nova_compute[189508]: </domain>
Dec  1 22:57:00 compute-0 nova_compute[189508]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.422 189512 DEBUG nova.compute.manager [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Preparing to wait for external event network-vif-plugged-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.422 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Acquiring lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.423 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.423 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.424 189512 DEBUG nova.virt.libvirt.vif [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:56:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1900389938',display_name='tempest-ServersTestManualDisk-server-1900389938',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1900389938',id=9,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNaLsmJG4vwWiNZjgeJrhuIUw802zdKjN36N6c3UsBfD2P4qIGHprwkEBkYg3KUq5Todbt496njxwVABElCJehOn2hYdLkSz75xjbX0QZJdXSQ9Ulz9a7UPzI5PjxZdpHQ==',key_name='tempest-keypair-296854846',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30e98aa31d6d4f7fa1c36a1e13fde3e4',ramdisk_id='',reservation_id='r-hmdy6r9h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-22155516',owner_user_name='tempest-ServersTestManualDisk-22155516-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:56:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4e2efc564e1a42b190b1eec7ab4437ec',uuid=fbf5b185-cbf1-488e-991b-a561cf724f9a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "address": "fa:16:3e:b0:37:a2", "network": {"id": "c6a7fa95-c3fa-44ca-b41e-76ef382cc755", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-328171273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30e98aa31d6d4f7fa1c36a1e13fde3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f45881-25", "ovs_interfaceid": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.424 189512 DEBUG nova.network.os_vif_util [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Converting VIF {"id": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "address": "fa:16:3e:b0:37:a2", "network": {"id": "c6a7fa95-c3fa-44ca-b41e-76ef382cc755", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-328171273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30e98aa31d6d4f7fa1c36a1e13fde3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f45881-25", "ovs_interfaceid": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.424 189512 DEBUG nova.network.os_vif_util [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:37:a2,bridge_name='br-int',has_traffic_filtering=True,id=f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c,network=Network(c6a7fa95-c3fa-44ca-b41e-76ef382cc755),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5f45881-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.425 189512 DEBUG os_vif [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:37:a2,bridge_name='br-int',has_traffic_filtering=True,id=f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c,network=Network(c6a7fa95-c3fa-44ca-b41e-76ef382cc755),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5f45881-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.425 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.426 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.426 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.433 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.434 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5f45881-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.435 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf5f45881-25, col_values=(('external_ids', {'iface-id': 'f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:37:a2', 'vm-uuid': 'fbf5b185-cbf1-488e-991b-a561cf724f9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.437 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.440 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:57:00 compute-0 NetworkManager[56278]: <info>  [1764629820.4401] manager: (tapf5f45881-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.454 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.455 189512 INFO os_vif [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:37:a2,bridge_name='br-int',has_traffic_filtering=True,id=f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c,network=Network(c6a7fa95-c3fa-44ca-b41e-76ef382cc755),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5f45881-25')#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.507 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.518 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.519 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.519 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] No VIF found with MAC fa:16:3e:b0:37:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 22:57:00 compute-0 nova_compute[189508]: 2025-12-01 22:57:00.520 189512 INFO nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Using config drive#033[00m
Dec  1 22:57:00 compute-0 podman[251959]: 2025-12-01 22:57:00.836838851 +0000 UTC m=+0.116599897 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:57:01 compute-0 nova_compute[189508]: 2025-12-01 22:57:01.112 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:01 compute-0 openstack_network_exporter[205887]: ERROR   22:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:57:01 compute-0 openstack_network_exporter[205887]: ERROR   22:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:57:01 compute-0 openstack_network_exporter[205887]: ERROR   22:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:57:01 compute-0 openstack_network_exporter[205887]: ERROR   22:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:57:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:57:01 compute-0 openstack_network_exporter[205887]: ERROR   22:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:57:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:57:01 compute-0 nova_compute[189508]: 2025-12-01 22:57:01.664 189512 INFO nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Creating config drive at /var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a/disk.config#033[00m
Dec  1 22:57:01 compute-0 nova_compute[189508]: 2025-12-01 22:57:01.679 189512 DEBUG oslo_concurrency.processutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx7pinnbi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:57:01 compute-0 nova_compute[189508]: 2025-12-01 22:57:01.825 189512 DEBUG oslo_concurrency.processutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx7pinnbi" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:57:01 compute-0 podman[251980]: 2025-12-01 22:57:01.872832017 +0000 UTC m=+0.144327624 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, 
org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Dec  1 22:57:01 compute-0 kernel: tapf5f45881-25: entered promiscuous mode
Dec  1 22:57:01 compute-0 NetworkManager[56278]: <info>  [1764629821.9126] manager: (tapf5f45881-25): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Dec  1 22:57:01 compute-0 ovn_controller[97770]: 2025-12-01T22:57:01Z|00094|binding|INFO|Claiming lport f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c for this chassis.
Dec  1 22:57:01 compute-0 ovn_controller[97770]: 2025-12-01T22:57:01Z|00095|binding|INFO|f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c: Claiming fa:16:3e:b0:37:a2 10.100.0.10
Dec  1 22:57:01 compute-0 nova_compute[189508]: 2025-12-01 22:57:01.914 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:01 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:01.932 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:37:a2 10.100.0.10'], port_security=['fa:16:3e:b0:37:a2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbf5b185-cbf1-488e-991b-a561cf724f9a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6a7fa95-c3fa-44ca-b41e-76ef382cc755', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30e98aa31d6d4f7fa1c36a1e13fde3e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77b67f3e-a85e-4200-b4ba-cc2f93b85dbc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12b3f7bc-8990-4c2e-b85b-bb81dc074ebc, chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:57:01 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:01.937 106662 INFO neutron.agent.ovn.metadata.agent [-] Port f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c in datapath c6a7fa95-c3fa-44ca-b41e-76ef382cc755 bound to our chassis#033[00m
Dec  1 22:57:01 compute-0 nova_compute[189508]: 2025-12-01 22:57:01.944 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:01 compute-0 ovn_controller[97770]: 2025-12-01T22:57:01Z|00096|binding|INFO|Setting lport f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c ovn-installed in OVS
Dec  1 22:57:01 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:01.941 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6a7fa95-c3fa-44ca-b41e-76ef382cc755#033[00m
Dec  1 22:57:01 compute-0 ovn_controller[97770]: 2025-12-01T22:57:01Z|00097|binding|INFO|Setting lport f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c up in Southbound
Dec  1 22:57:01 compute-0 systemd-udevd[252013]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:57:01 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:01.959 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[27a64311-8e92-4881-9707-d14191f98dca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:01 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:01.961 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc6a7fa95-c1 in ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 22:57:01 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:01.963 239973 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc6a7fa95-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 22:57:01 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:01.963 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[623eb0c7-6838-464d-a30f-b4d1e2bd76e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:01 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:01.964 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1c85e9-c32a-469b-b0e5-491411ab27ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:01 compute-0 systemd-machined[155759]: New machine qemu-9-instance-00000009.
Dec  1 22:57:01 compute-0 NetworkManager[56278]: <info>  [1764629821.9745] device (tapf5f45881-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 22:57:01 compute-0 NetworkManager[56278]: <info>  [1764629821.9756] device (tapf5f45881-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 22:57:01 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Dec  1 22:57:01 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:01.978 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[bc17039c-ebf0-42c3-a4bb-929193882853]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.004 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec10a0f-a658-4bcd-8eb6-469544da8389]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.051 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[2e799a2c-5737-43ab-949e-f54aa1fe7fa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:02 compute-0 NetworkManager[56278]: <info>  [1764629822.0618] manager: (tapc6a7fa95-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Dec  1 22:57:02 compute-0 systemd-udevd[252018]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.062 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d9bf97-0a89-4dee-801b-56739c0f3321]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.104 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[2acc2e4c-80ec-43c0-863a-1106b89d6872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.110 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa8cd1b-37ca-4f6b-b48b-c90d9122cbe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:02 compute-0 NetworkManager[56278]: <info>  [1764629822.1353] device (tapc6a7fa95-c0): carrier: link connected
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.146 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[19bd3446-c7e3-4275-8bcb-f36ecc2905ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.165 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[26f33b63-6a4a-4e40-8310-7d24efa56e63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6a7fa95-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:00:ff'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533170, 'reachable_time': 38653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252047, 'error': None, 'target': 'ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.186 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd72490-d875-43d4-b659-fd02f59faefa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe39:ff'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 533170, 'tstamp': 533170}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252051, 'error': None, 'target': 'ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.211 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[4b396d50-8484-4b7a-9c96-3210e8217fe5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6a7fa95-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:00:ff'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533170, 'reachable_time': 38653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252055, 'error': None, 'target': 'ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.256 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[a436c4f9-eb23-420f-8c9a-bbfbf8015cca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:02 compute-0 nova_compute[189508]: 2025-12-01 22:57:02.279 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629822.2788012, fbf5b185-cbf1-488e-991b-a561cf724f9a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:57:02 compute-0 nova_compute[189508]: 2025-12-01 22:57:02.279 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] VM Started (Lifecycle Event)#033[00m
Dec  1 22:57:02 compute-0 nova_compute[189508]: 2025-12-01 22:57:02.298 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:57:02 compute-0 nova_compute[189508]: 2025-12-01 22:57:02.303 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629822.2789035, fbf5b185-cbf1-488e-991b-a561cf724f9a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:57:02 compute-0 nova_compute[189508]: 2025-12-01 22:57:02.303 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] VM Paused (Lifecycle Event)#033[00m
Dec  1 22:57:02 compute-0 nova_compute[189508]: 2025-12-01 22:57:02.320 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.323 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[29b5dbb7-13ea-4582-9011-f9f540c8ea4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:02 compute-0 nova_compute[189508]: 2025-12-01 22:57:02.324 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.325 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6a7fa95-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.326 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.326 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6a7fa95-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:57:02 compute-0 nova_compute[189508]: 2025-12-01 22:57:02.328 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:02 compute-0 NetworkManager[56278]: <info>  [1764629822.3297] manager: (tapc6a7fa95-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Dec  1 22:57:02 compute-0 kernel: tapc6a7fa95-c0: entered promiscuous mode
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.333 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6a7fa95-c0, col_values=(('external_ids', {'iface-id': '8d2e5941-e4c0-4c22-87b3-f3788b9350e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:57:02 compute-0 nova_compute[189508]: 2025-12-01 22:57:02.333 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:02 compute-0 nova_compute[189508]: 2025-12-01 22:57:02.335 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:02 compute-0 ovn_controller[97770]: 2025-12-01T22:57:02Z|00098|binding|INFO|Releasing lport 8d2e5941-e4c0-4c22-87b3-f3788b9350e6 from this chassis (sb_readonly=0)
Dec  1 22:57:02 compute-0 nova_compute[189508]: 2025-12-01 22:57:02.343 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:57:02 compute-0 nova_compute[189508]: 2025-12-01 22:57:02.349 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:02 compute-0 nova_compute[189508]: 2025-12-01 22:57:02.352 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.353 106662 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6a7fa95-c3fa-44ca-b41e-76ef382cc755.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6a7fa95-c3fa-44ca-b41e-76ef382cc755.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.354 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ebff45-d8b8-4cef-821e-54f692a5e102]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.355 106662 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: global
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    log         /dev/log local0 debug
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    log-tag     haproxy-metadata-proxy-c6a7fa95-c3fa-44ca-b41e-76ef382cc755
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    user        root
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    group       root
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    maxconn     1024
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    pidfile     /var/lib/neutron/external/pids/c6a7fa95-c3fa-44ca-b41e-76ef382cc755.pid.haproxy
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    daemon
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: defaults
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    log global
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    mode http
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    option httplog
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    option dontlognull
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    option http-server-close
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    option forwardfor
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    retries                 3
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    timeout http-request    30s
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    timeout connect         30s
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    timeout client          32s
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    timeout server          32s
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    timeout http-keep-alive 30s
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: listen listener
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    bind 169.254.169.254:80
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]:    http-request add-header X-OVN-Network-ID c6a7fa95-c3fa-44ca-b41e-76ef382cc755
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 22:57:02 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:02.357 106662 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755', 'env', 'PROCESS_TAG=haproxy-c6a7fa95-c3fa-44ca-b41e-76ef382cc755', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c6a7fa95-c3fa-44ca-b41e-76ef382cc755.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 22:57:02 compute-0 podman[252084]: 2025-12-01 22:57:02.789819128 +0000 UTC m=+0.093394319 container create b0950f2cab12eba550cefa169ffa3f778c52f8d1884b02a8b21f2a03e21fe0b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  1 22:57:02 compute-0 podman[252084]: 2025-12-01 22:57:02.737868285 +0000 UTC m=+0.041443496 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 22:57:02 compute-0 systemd[1]: Started libpod-conmon-b0950f2cab12eba550cefa169ffa3f778c52f8d1884b02a8b21f2a03e21fe0b0.scope.
Dec  1 22:57:02 compute-0 systemd[1]: Started libcrun container.
Dec  1 22:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fcc8dfac8eff70954e45506e59e68cd5a50196e3dbd238d39699927835e3a86/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 22:57:02 compute-0 podman[252084]: 2025-12-01 22:57:02.896605536 +0000 UTC m=+0.200180737 container init b0950f2cab12eba550cefa169ffa3f778c52f8d1884b02a8b21f2a03e21fe0b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  1 22:57:02 compute-0 podman[252084]: 2025-12-01 22:57:02.90450498 +0000 UTC m=+0.208080171 container start b0950f2cab12eba550cefa169ffa3f778c52f8d1884b02a8b21f2a03e21fe0b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  1 22:57:02 compute-0 neutron-haproxy-ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755[252099]: [NOTICE]   (252103) : New worker (252105) forked
Dec  1 22:57:02 compute-0 neutron-haproxy-ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755[252099]: [NOTICE]   (252103) : Loading success.
Dec  1 22:57:03 compute-0 nova_compute[189508]: 2025-12-01 22:57:03.652 189512 DEBUG nova.network.neutron [req-33dea83f-950d-49a9-8921-97b3825fd01e req-bd31abed-fac0-4b10-8e77-68efcbe51c5d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Updated VIF entry in instance network info cache for port f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:57:03 compute-0 nova_compute[189508]: 2025-12-01 22:57:03.652 189512 DEBUG nova.network.neutron [req-33dea83f-950d-49a9-8921-97b3825fd01e req-bd31abed-fac0-4b10-8e77-68efcbe51c5d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Updating instance_info_cache with network_info: [{"id": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "address": "fa:16:3e:b0:37:a2", "network": {"id": "c6a7fa95-c3fa-44ca-b41e-76ef382cc755", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-328171273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30e98aa31d6d4f7fa1c36a1e13fde3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f45881-25", "ovs_interfaceid": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:57:03 compute-0 nova_compute[189508]: 2025-12-01 22:57:03.674 189512 DEBUG oslo_concurrency.lockutils [req-33dea83f-950d-49a9-8921-97b3825fd01e req-bd31abed-fac0-4b10-8e77-68efcbe51c5d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-fbf5b185-cbf1-488e-991b-a561cf724f9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:57:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:04.640 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:04.642 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:04.644 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:05 compute-0 nova_compute[189508]: 2025-12-01 22:57:05.438 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:05 compute-0 nova_compute[189508]: 2025-12-01 22:57:05.509 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:07 compute-0 podman[252115]: 2025-12-01 22:57:07.791194721 +0000 UTC m=+0.069258015 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 22:57:07 compute-0 podman[252114]: 2025-12-01 22:57:07.836535817 +0000 UTC m=+0.117327198 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.115 189512 DEBUG nova.compute.manager [req-df48329f-104d-43cf-8033-4539a3b417a0 req-72cf526e-007e-4f78-8cdb-d15d402fd539 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Received event network-vif-plugged-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.116 189512 DEBUG oslo_concurrency.lockutils [req-df48329f-104d-43cf-8033-4539a3b417a0 req-72cf526e-007e-4f78-8cdb-d15d402fd539 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.116 189512 DEBUG oslo_concurrency.lockutils [req-df48329f-104d-43cf-8033-4539a3b417a0 req-72cf526e-007e-4f78-8cdb-d15d402fd539 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.117 189512 DEBUG oslo_concurrency.lockutils [req-df48329f-104d-43cf-8033-4539a3b417a0 req-72cf526e-007e-4f78-8cdb-d15d402fd539 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.117 189512 DEBUG nova.compute.manager [req-df48329f-104d-43cf-8033-4539a3b417a0 req-72cf526e-007e-4f78-8cdb-d15d402fd539 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Processing event network-vif-plugged-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.118 189512 DEBUG nova.compute.manager [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.124 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.124 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629828.1240244, fbf5b185-cbf1-488e-991b-a561cf724f9a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.124 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] VM Resumed (Lifecycle Event)#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.130 189512 INFO nova.virt.libvirt.driver [-] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Instance spawned successfully.#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.130 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.148 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.156 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.156 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.157 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.158 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.158 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.159 189512 DEBUG nova.virt.libvirt.driver [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.165 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.189 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.218 189512 INFO nova.compute.manager [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Took 14.23 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.219 189512 DEBUG nova.compute.manager [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.304 189512 INFO nova.compute.manager [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Took 14.74 seconds to build instance.#033[00m
Dec  1 22:57:08 compute-0 nova_compute[189508]: 2025-12-01 22:57:08.325 189512 DEBUG oslo_concurrency.lockutils [None req-3f66ed20-91c7-48fc-9383-eda5ed035858 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "fbf5b185-cbf1-488e-991b-a561cf724f9a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:10 compute-0 nova_compute[189508]: 2025-12-01 22:57:10.224 189512 DEBUG nova.compute.manager [req-661bd13f-8b67-4051-aec2-9f02f0b4e26e req-8215588f-e6aa-4b4b-9b83-d08ca887cdec c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Received event network-vif-plugged-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:57:10 compute-0 nova_compute[189508]: 2025-12-01 22:57:10.226 189512 DEBUG oslo_concurrency.lockutils [req-661bd13f-8b67-4051-aec2-9f02f0b4e26e req-8215588f-e6aa-4b4b-9b83-d08ca887cdec c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:10 compute-0 nova_compute[189508]: 2025-12-01 22:57:10.226 189512 DEBUG oslo_concurrency.lockutils [req-661bd13f-8b67-4051-aec2-9f02f0b4e26e req-8215588f-e6aa-4b4b-9b83-d08ca887cdec c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:10 compute-0 nova_compute[189508]: 2025-12-01 22:57:10.227 189512 DEBUG oslo_concurrency.lockutils [req-661bd13f-8b67-4051-aec2-9f02f0b4e26e req-8215588f-e6aa-4b4b-9b83-d08ca887cdec c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:10 compute-0 nova_compute[189508]: 2025-12-01 22:57:10.227 189512 DEBUG nova.compute.manager [req-661bd13f-8b67-4051-aec2-9f02f0b4e26e req-8215588f-e6aa-4b4b-9b83-d08ca887cdec c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] No waiting events found dispatching network-vif-plugged-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:57:10 compute-0 nova_compute[189508]: 2025-12-01 22:57:10.228 189512 WARNING nova.compute.manager [req-661bd13f-8b67-4051-aec2-9f02f0b4e26e req-8215588f-e6aa-4b4b-9b83-d08ca887cdec c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Received unexpected event network-vif-plugged-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c for instance with vm_state active and task_state None.#033[00m
Dec  1 22:57:10 compute-0 nova_compute[189508]: 2025-12-01 22:57:10.441 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:10 compute-0 nova_compute[189508]: 2025-12-01 22:57:10.511 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:11 compute-0 ovn_controller[97770]: 2025-12-01T22:57:11Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:0a:ea 10.100.0.11
Dec  1 22:57:11 compute-0 ovn_controller[97770]: 2025-12-01T22:57:11Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:0a:ea 10.100.0.11
Dec  1 22:57:12 compute-0 nova_compute[189508]: 2025-12-01 22:57:12.727 189512 DEBUG nova.compute.manager [req-64b28859-71b4-4e8a-b2a8-b30f46cd13a1 req-d02ff57d-e4a6-47b8-8bf3-3f4ece3ba662 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Received event network-changed-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:57:12 compute-0 nova_compute[189508]: 2025-12-01 22:57:12.727 189512 DEBUG nova.compute.manager [req-64b28859-71b4-4e8a-b2a8-b30f46cd13a1 req-d02ff57d-e4a6-47b8-8bf3-3f4ece3ba662 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Refreshing instance network info cache due to event network-changed-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:57:12 compute-0 nova_compute[189508]: 2025-12-01 22:57:12.728 189512 DEBUG oslo_concurrency.lockutils [req-64b28859-71b4-4e8a-b2a8-b30f46cd13a1 req-d02ff57d-e4a6-47b8-8bf3-3f4ece3ba662 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-fbf5b185-cbf1-488e-991b-a561cf724f9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:57:12 compute-0 nova_compute[189508]: 2025-12-01 22:57:12.728 189512 DEBUG oslo_concurrency.lockutils [req-64b28859-71b4-4e8a-b2a8-b30f46cd13a1 req-d02ff57d-e4a6-47b8-8bf3-3f4ece3ba662 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-fbf5b185-cbf1-488e-991b-a561cf724f9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:57:12 compute-0 nova_compute[189508]: 2025-12-01 22:57:12.728 189512 DEBUG nova.network.neutron [req-64b28859-71b4-4e8a-b2a8-b30f46cd13a1 req-d02ff57d-e4a6-47b8-8bf3-3f4ece3ba662 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Refreshing network info cache for port f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:57:12 compute-0 podman[252169]: 2025-12-01 22:57:12.816917075 +0000 UTC m=+0.098286468 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 22:57:12 compute-0 podman[252168]: 2025-12-01 22:57:12.817973055 +0000 UTC m=+0.098145434 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 22:57:12 compute-0 podman[252170]: 2025-12-01 22:57:12.849818158 +0000 UTC m=+0.111059790 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container)
Dec  1 22:57:12 compute-0 podman[252171]: 2025-12-01 22:57:12.879706116 +0000 UTC m=+0.144854099 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.expose-services=, name=ubi9, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git)
Dec  1 22:57:13 compute-0 nova_compute[189508]: 2025-12-01 22:57:13.818 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:13 compute-0 nova_compute[189508]: 2025-12-01 22:57:13.942 189512 DEBUG oslo_concurrency.lockutils [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Acquiring lock "fbf5b185-cbf1-488e-991b-a561cf724f9a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:13 compute-0 nova_compute[189508]: 2025-12-01 22:57:13.943 189512 DEBUG oslo_concurrency.lockutils [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "fbf5b185-cbf1-488e-991b-a561cf724f9a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:13 compute-0 nova_compute[189508]: 2025-12-01 22:57:13.944 189512 DEBUG oslo_concurrency.lockutils [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Acquiring lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:13 compute-0 nova_compute[189508]: 2025-12-01 22:57:13.945 189512 DEBUG oslo_concurrency.lockutils [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:13 compute-0 nova_compute[189508]: 2025-12-01 22:57:13.946 189512 DEBUG oslo_concurrency.lockutils [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:13 compute-0 nova_compute[189508]: 2025-12-01 22:57:13.947 189512 INFO nova.compute.manager [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Terminating instance#033[00m
Dec  1 22:57:13 compute-0 nova_compute[189508]: 2025-12-01 22:57:13.949 189512 DEBUG nova.compute.manager [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 22:57:13 compute-0 kernel: tapf5f45881-25 (unregistering): left promiscuous mode
Dec  1 22:57:13 compute-0 NetworkManager[56278]: <info>  [1764629833.9837] device (tapf5f45881-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 22:57:13 compute-0 nova_compute[189508]: 2025-12-01 22:57:13.990 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:13 compute-0 ovn_controller[97770]: 2025-12-01T22:57:13Z|00099|binding|INFO|Releasing lport f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c from this chassis (sb_readonly=0)
Dec  1 22:57:13 compute-0 ovn_controller[97770]: 2025-12-01T22:57:13Z|00100|binding|INFO|Setting lport f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c down in Southbound
Dec  1 22:57:13 compute-0 ovn_controller[97770]: 2025-12-01T22:57:13Z|00101|binding|INFO|Removing iface tapf5f45881-25 ovn-installed in OVS
Dec  1 22:57:13 compute-0 nova_compute[189508]: 2025-12-01 22:57:13.994 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:13.999 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:37:a2 10.100.0.10'], port_security=['fa:16:3e:b0:37:a2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbf5b185-cbf1-488e-991b-a561cf724f9a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6a7fa95-c3fa-44ca-b41e-76ef382cc755', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30e98aa31d6d4f7fa1c36a1e13fde3e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77b67f3e-a85e-4200-b4ba-cc2f93b85dbc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12b3f7bc-8990-4c2e-b85b-bb81dc074ebc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:57:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:14.000 106662 INFO neutron.agent.ovn.metadata.agent [-] Port f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c in datapath c6a7fa95-c3fa-44ca-b41e-76ef382cc755 unbound from our chassis#033[00m
Dec  1 22:57:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:14.002 106662 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c6a7fa95-c3fa-44ca-b41e-76ef382cc755, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 22:57:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:14.003 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[0719c563-fa36-46e2-be38-9bea47573368]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:14.004 106662 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755 namespace which is not needed anymore#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.010 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:14 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec  1 22:57:14 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 6.467s CPU time.
Dec  1 22:57:14 compute-0 systemd-machined[155759]: Machine qemu-9-instance-00000009 terminated.
Dec  1 22:57:14 compute-0 neutron-haproxy-ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755[252099]: [NOTICE]   (252103) : haproxy version is 2.8.14-c23fe91
Dec  1 22:57:14 compute-0 neutron-haproxy-ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755[252099]: [NOTICE]   (252103) : path to executable is /usr/sbin/haproxy
Dec  1 22:57:14 compute-0 neutron-haproxy-ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755[252099]: [WARNING]  (252103) : Exiting Master process...
Dec  1 22:57:14 compute-0 neutron-haproxy-ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755[252099]: [ALERT]    (252103) : Current worker (252105) exited with code 143 (Terminated)
Dec  1 22:57:14 compute-0 neutron-haproxy-ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755[252099]: [WARNING]  (252103) : All workers exited. Exiting... (0)
Dec  1 22:57:14 compute-0 systemd[1]: libpod-b0950f2cab12eba550cefa169ffa3f778c52f8d1884b02a8b21f2a03e21fe0b0.scope: Deactivated successfully.
Dec  1 22:57:14 compute-0 podman[252267]: 2025-12-01 22:57:14.225817245 +0000 UTC m=+0.082294855 container died b0950f2cab12eba550cefa169ffa3f778c52f8d1884b02a8b21f2a03e21fe0b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.242 189512 INFO nova.virt.libvirt.driver [-] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Instance destroyed successfully.#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.243 189512 DEBUG nova.objects.instance [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lazy-loading 'resources' on Instance uuid fbf5b185-cbf1-488e-991b-a561cf724f9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.262 189512 DEBUG nova.virt.libvirt.vif [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:56:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1900389938',display_name='tempest-ServersTestManualDisk-server-1900389938',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1900389938',id=9,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNaLsmJG4vwWiNZjgeJrhuIUw802zdKjN36N6c3UsBfD2P4qIGHprwkEBkYg3KUq5Todbt496njxwVABElCJehOn2hYdLkSz75xjbX0QZJdXSQ9Ulz9a7UPzI5PjxZdpHQ==',key_name='tempest-keypair-296854846',keypairs=<?>,launch_index=0,launched_at=2025-12-01T22:57:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='30e98aa31d6d4f7fa1c36a1e13fde3e4',ramdisk_id='',reservation_id='r-hmdy6r9h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-22155516',owner_user_name='tempest-ServersTestManualDisk-22155516-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T22:57:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4e2efc564e1a42b190b1eec7ab4437ec',uuid=fbf5b185-cbf1-488e-991b-a561cf724f9a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "address": "fa:16:3e:b0:37:a2", "network": {"id": "c6a7fa95-c3fa-44ca-b41e-76ef382cc755", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-328171273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30e98aa31d6d4f7fa1c36a1e13fde3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f45881-25", "ovs_interfaceid": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.263 189512 DEBUG nova.network.os_vif_util [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Converting VIF {"id": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "address": "fa:16:3e:b0:37:a2", "network": {"id": "c6a7fa95-c3fa-44ca-b41e-76ef382cc755", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-328171273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30e98aa31d6d4f7fa1c36a1e13fde3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f45881-25", "ovs_interfaceid": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.264 189512 DEBUG nova.network.os_vif_util [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:37:a2,bridge_name='br-int',has_traffic_filtering=True,id=f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c,network=Network(c6a7fa95-c3fa-44ca-b41e-76ef382cc755),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5f45881-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.265 189512 DEBUG os_vif [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:37:a2,bridge_name='br-int',has_traffic_filtering=True,id=f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c,network=Network(c6a7fa95-c3fa-44ca-b41e-76ef382cc755),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5f45881-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.267 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.267 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5f45881-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:57:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b0950f2cab12eba550cefa169ffa3f778c52f8d1884b02a8b21f2a03e21fe0b0-userdata-shm.mount: Deactivated successfully.
Dec  1 22:57:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fcc8dfac8eff70954e45506e59e68cd5a50196e3dbd238d39699927835e3a86-merged.mount: Deactivated successfully.
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.272 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.275 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.279 189512 INFO os_vif [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:37:a2,bridge_name='br-int',has_traffic_filtering=True,id=f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c,network=Network(c6a7fa95-c3fa-44ca-b41e-76ef382cc755),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5f45881-25')#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.280 189512 INFO nova.virt.libvirt.driver [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Deleting instance files /var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a_del#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.281 189512 INFO nova.virt.libvirt.driver [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Deletion of /var/lib/nova/instances/fbf5b185-cbf1-488e-991b-a561cf724f9a_del complete#033[00m
Dec  1 22:57:14 compute-0 podman[252267]: 2025-12-01 22:57:14.285566509 +0000 UTC m=+0.142044149 container cleanup b0950f2cab12eba550cefa169ffa3f778c52f8d1884b02a8b21f2a03e21fe0b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 22:57:14 compute-0 systemd[1]: libpod-conmon-b0950f2cab12eba550cefa169ffa3f778c52f8d1884b02a8b21f2a03e21fe0b0.scope: Deactivated successfully.
Dec  1 22:57:14 compute-0 podman[252311]: 2025-12-01 22:57:14.377466595 +0000 UTC m=+0.061048782 container remove b0950f2cab12eba550cefa169ffa3f778c52f8d1884b02a8b21f2a03e21fe0b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 22:57:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:14.386 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[c72fae07-3380-46b4-ae0e-308dec3b7f3b]: (4, ('Mon Dec  1 10:57:14 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755 (b0950f2cab12eba550cefa169ffa3f778c52f8d1884b02a8b21f2a03e21fe0b0)\nb0950f2cab12eba550cefa169ffa3f778c52f8d1884b02a8b21f2a03e21fe0b0\nMon Dec  1 10:57:14 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755 (b0950f2cab12eba550cefa169ffa3f778c52f8d1884b02a8b21f2a03e21fe0b0)\nb0950f2cab12eba550cefa169ffa3f778c52f8d1884b02a8b21f2a03e21fe0b0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:14.389 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[88a9c537-6132-4714-8fd7-5545f488939f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:14.391 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6a7fa95-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.394 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:14 compute-0 kernel: tapc6a7fa95-c0: left promiscuous mode
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.403 189512 INFO nova.compute.manager [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Took 0.45 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.403 189512 DEBUG oslo.service.loopingcall [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.404 189512 DEBUG nova.compute.manager [-] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.404 189512 DEBUG nova.network.neutron [-] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.408 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:14.411 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[048bb55c-3445-4e4d-ac48-522bd82493fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:14.423 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[22943444-e94a-4e03-a16d-2f894f2e3262]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:14.425 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[b3cc729d-b9ea-4230-a0b4-f72552cb19f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:14.441 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[0540380b-7962-4c7e-9a2d-de09c8ddfa0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533161, 'reachable_time': 36354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252326, 'error': None, 'target': 'ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:14 compute-0 systemd[1]: run-netns-ovnmeta\x2dc6a7fa95\x2dc3fa\x2d44ca\x2db41e\x2d76ef382cc755.mount: Deactivated successfully.
Dec  1 22:57:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:14.445 106770 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c6a7fa95-c3fa-44ca-b41e-76ef382cc755 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 22:57:14 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:14.445 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[69937b16-64a0-4486-a631-f0fa0ec5f5a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.781 189512 DEBUG nova.compute.manager [req-11f39f41-841f-489d-ba89-8c9e6acc251e req-eb66dd80-6260-4c4e-a4a1-9c7fde0a304a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Received event network-vif-unplugged-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.782 189512 DEBUG oslo_concurrency.lockutils [req-11f39f41-841f-489d-ba89-8c9e6acc251e req-eb66dd80-6260-4c4e-a4a1-9c7fde0a304a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.783 189512 DEBUG oslo_concurrency.lockutils [req-11f39f41-841f-489d-ba89-8c9e6acc251e req-eb66dd80-6260-4c4e-a4a1-9c7fde0a304a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.784 189512 DEBUG oslo_concurrency.lockutils [req-11f39f41-841f-489d-ba89-8c9e6acc251e req-eb66dd80-6260-4c4e-a4a1-9c7fde0a304a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.785 189512 DEBUG nova.compute.manager [req-11f39f41-841f-489d-ba89-8c9e6acc251e req-eb66dd80-6260-4c4e-a4a1-9c7fde0a304a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] No waiting events found dispatching network-vif-unplugged-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:57:14 compute-0 nova_compute[189508]: 2025-12-01 22:57:14.785 189512 DEBUG nova.compute.manager [req-11f39f41-841f-489d-ba89-8c9e6acc251e req-eb66dd80-6260-4c4e-a4a1-9c7fde0a304a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Received event network-vif-unplugged-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 22:57:15 compute-0 ovn_controller[97770]: 2025-12-01T22:57:15Z|00102|binding|INFO|Releasing lport 0bac805e-79cd-4ef5-a08c-830fa9d99912 from this chassis (sb_readonly=0)
Dec  1 22:57:15 compute-0 nova_compute[189508]: 2025-12-01 22:57:15.463 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:15 compute-0 nova_compute[189508]: 2025-12-01 22:57:15.514 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:15 compute-0 nova_compute[189508]: 2025-12-01 22:57:15.680 189512 DEBUG nova.network.neutron [req-64b28859-71b4-4e8a-b2a8-b30f46cd13a1 req-d02ff57d-e4a6-47b8-8bf3-3f4ece3ba662 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Updated VIF entry in instance network info cache for port f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:57:15 compute-0 nova_compute[189508]: 2025-12-01 22:57:15.681 189512 DEBUG nova.network.neutron [req-64b28859-71b4-4e8a-b2a8-b30f46cd13a1 req-d02ff57d-e4a6-47b8-8bf3-3f4ece3ba662 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Updating instance_info_cache with network_info: [{"id": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "address": "fa:16:3e:b0:37:a2", "network": {"id": "c6a7fa95-c3fa-44ca-b41e-76ef382cc755", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-328171273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30e98aa31d6d4f7fa1c36a1e13fde3e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5f45881-25", "ovs_interfaceid": "f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:57:15 compute-0 nova_compute[189508]: 2025-12-01 22:57:15.701 189512 DEBUG oslo_concurrency.lockutils [req-64b28859-71b4-4e8a-b2a8-b30f46cd13a1 req-d02ff57d-e4a6-47b8-8bf3-3f4ece3ba662 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-fbf5b185-cbf1-488e-991b-a561cf724f9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:57:16 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:16.870 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:57:16 compute-0 nova_compute[189508]: 2025-12-01 22:57:16.871 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:16 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:16.874 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 22:57:16 compute-0 nova_compute[189508]: 2025-12-01 22:57:16.930 189512 DEBUG nova.compute.manager [req-d71e8227-bb09-4fa6-a922-c6aa9055096a req-a64a6352-2789-45b0-bc43-fd58d5e2df6e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Received event network-vif-plugged-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:57:16 compute-0 nova_compute[189508]: 2025-12-01 22:57:16.931 189512 DEBUG oslo_concurrency.lockutils [req-d71e8227-bb09-4fa6-a922-c6aa9055096a req-a64a6352-2789-45b0-bc43-fd58d5e2df6e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:16 compute-0 nova_compute[189508]: 2025-12-01 22:57:16.931 189512 DEBUG oslo_concurrency.lockutils [req-d71e8227-bb09-4fa6-a922-c6aa9055096a req-a64a6352-2789-45b0-bc43-fd58d5e2df6e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:16 compute-0 nova_compute[189508]: 2025-12-01 22:57:16.932 189512 DEBUG oslo_concurrency.lockutils [req-d71e8227-bb09-4fa6-a922-c6aa9055096a req-a64a6352-2789-45b0-bc43-fd58d5e2df6e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "fbf5b185-cbf1-488e-991b-a561cf724f9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:16 compute-0 nova_compute[189508]: 2025-12-01 22:57:16.932 189512 DEBUG nova.compute.manager [req-d71e8227-bb09-4fa6-a922-c6aa9055096a req-a64a6352-2789-45b0-bc43-fd58d5e2df6e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] No waiting events found dispatching network-vif-plugged-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:57:16 compute-0 nova_compute[189508]: 2025-12-01 22:57:16.932 189512 WARNING nova.compute.manager [req-d71e8227-bb09-4fa6-a922-c6aa9055096a req-a64a6352-2789-45b0-bc43-fd58d5e2df6e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Received unexpected event network-vif-plugged-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c for instance with vm_state active and task_state deleting.#033[00m
Dec  1 22:57:17 compute-0 nova_compute[189508]: 2025-12-01 22:57:17.200 189512 DEBUG nova.network.neutron [-] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:57:17 compute-0 nova_compute[189508]: 2025-12-01 22:57:17.217 189512 INFO nova.compute.manager [-] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Took 2.81 seconds to deallocate network for instance.#033[00m
Dec  1 22:57:17 compute-0 nova_compute[189508]: 2025-12-01 22:57:17.273 189512 DEBUG oslo_concurrency.lockutils [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:17 compute-0 nova_compute[189508]: 2025-12-01 22:57:17.274 189512 DEBUG oslo_concurrency.lockutils [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:17 compute-0 nova_compute[189508]: 2025-12-01 22:57:17.628 189512 DEBUG nova.compute.provider_tree [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:57:17 compute-0 nova_compute[189508]: 2025-12-01 22:57:17.646 189512 DEBUG nova.scheduler.client.report [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:57:17 compute-0 nova_compute[189508]: 2025-12-01 22:57:17.665 189512 DEBUG oslo_concurrency.lockutils [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:17 compute-0 nova_compute[189508]: 2025-12-01 22:57:17.692 189512 INFO nova.scheduler.client.report [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Deleted allocations for instance fbf5b185-cbf1-488e-991b-a561cf724f9a#033[00m
Dec  1 22:57:17 compute-0 nova_compute[189508]: 2025-12-01 22:57:17.757 189512 DEBUG oslo_concurrency.lockutils [None req-c73cd0c4-ed62-4dbe-a6f6-ee0cdad37fb3 4e2efc564e1a42b190b1eec7ab4437ec 30e98aa31d6d4f7fa1c36a1e13fde3e4 - - default default] Lock "fbf5b185-cbf1-488e-991b-a561cf724f9a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:19 compute-0 nova_compute[189508]: 2025-12-01 22:57:19.274 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:19 compute-0 nova_compute[189508]: 2025-12-01 22:57:19.423 189512 DEBUG nova.compute.manager [req-6229e857-fea0-4ef2-862f-a82135fa379c req-f03d439b-fadf-47b3-a33f-9bdb243245b8 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Received event network-vif-deleted-f5f45881-25e4-423e-9dcf-0ca8b3ad3a6c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:57:20 compute-0 nova_compute[189508]: 2025-12-01 22:57:20.517 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:20 compute-0 nova_compute[189508]: 2025-12-01 22:57:20.811 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:24 compute-0 nova_compute[189508]: 2025-12-01 22:57:24.285 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:24.879 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:57:25 compute-0 nova_compute[189508]: 2025-12-01 22:57:25.521 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:28 compute-0 podman[252328]: 2025-12-01 22:57:28.823132594 +0000 UTC m=+0.092997868 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:57:29 compute-0 ovn_controller[97770]: 2025-12-01T22:57:29Z|00103|binding|INFO|Releasing lport 0bac805e-79cd-4ef5-a08c-830fa9d99912 from this chassis (sb_readonly=0)
Dec  1 22:57:29 compute-0 nova_compute[189508]: 2025-12-01 22:57:29.240 189512 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764629834.237621, fbf5b185-cbf1-488e-991b-a561cf724f9a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:57:29 compute-0 nova_compute[189508]: 2025-12-01 22:57:29.241 189512 INFO nova.compute.manager [-] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] VM Stopped (Lifecycle Event)#033[00m
Dec  1 22:57:29 compute-0 nova_compute[189508]: 2025-12-01 22:57:29.272 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:29 compute-0 nova_compute[189508]: 2025-12-01 22:57:29.289 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:29 compute-0 nova_compute[189508]: 2025-12-01 22:57:29.292 189512 DEBUG nova.compute.manager [None req-90c97ce6-dfcf-4a65-bc7d-d6020097b4f8 - - - - - -] [instance: fbf5b185-cbf1-488e-991b-a561cf724f9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:57:29 compute-0 podman[203693]: time="2025-12-01T22:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:57:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  1 22:57:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Dec  1 22:57:30 compute-0 nova_compute[189508]: 2025-12-01 22:57:30.525 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:31 compute-0 openstack_network_exporter[205887]: ERROR   22:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:57:31 compute-0 openstack_network_exporter[205887]: ERROR   22:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:57:31 compute-0 openstack_network_exporter[205887]: ERROR   22:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:57:31 compute-0 openstack_network_exporter[205887]: ERROR   22:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:57:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:57:31 compute-0 openstack_network_exporter[205887]: ERROR   22:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:57:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:57:31 compute-0 podman[252353]: 2025-12-01 22:57:31.858131321 +0000 UTC m=+0.122706870 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd)
Dec  1 22:57:32 compute-0 podman[252372]: 2025-12-01 22:57:32.814371914 +0000 UTC m=+0.092230305 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 22:57:34 compute-0 nova_compute[189508]: 2025-12-01 22:57:34.292 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.275 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.275 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03f20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.282 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 691446f5-d3d8-4a4f-a161-f2058a04a59d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 22:57:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:35.283 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/691446f5-d3d8-4a4f-a161-f2058a04a59d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82f68aee2d35afc7725a847ea4300457258faf9d3b47fbdf3a1dc69f53294b24" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 22:57:35 compute-0 nova_compute[189508]: 2025-12-01 22:57:35.527 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.774 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1997 Content-Type: application/json Date: Mon, 01 Dec 2025 22:57:36 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-4de099b9-817f-435e-b594-469712bce262 x-openstack-request-id: req-4de099b9-817f-435e-b594-469712bce262 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.774 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "691446f5-d3d8-4a4f-a161-f2058a04a59d", "name": "tempest-AttachInterfacesUnderV243Test-server-871685025", "status": "ACTIVE", "tenant_id": "5dde91941cac4081b671670d9a874621", "user_id": "9177a32b390447b1acbb338fbf90b4bc", "metadata": {}, "hostId": "29ad421cb9b3b7c2a60b6f4b5d034cd83cd01fba62f60fe162774580", "image": {"id": "74bb08bf-1799-4930-aad4-d505f26ff5f4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/74bb08bf-1799-4930-aad4-d505f26ff5f4"}]}, "flavor": {"id": "2e42a55e-71e2-4041-8ca2-725d63f058bf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/2e42a55e-71e2-4041-8ca2-725d63f058bf"}]}, "created": "2025-12-01T22:56:12Z", "updated": "2025-12-01T22:56:36Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-1252852700-network": [{"version": 4, "addr": "10.100.0.11", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ad:0a:ea"}, {"version": 4, "addr": "192.168.122.239", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ad:0a:ea"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/691446f5-d3d8-4a4f-a161-f2058a04a59d"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/691446f5-d3d8-4a4f-a161-f2058a04a59d"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1770308231", "OS-SRV-USG:launched_at": "2025-12-01T22:56:36.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1802778051"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.774 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/691446f5-d3d8-4a4f-a161-f2058a04a59d used request id req-4de099b9-817f-435e-b594-469712bce262 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.776 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '691446f5-d3d8-4a4f-a161-f2058a04a59d', 'name': 'tempest-AttachInterfacesUnderV243Test-server-871685025', 'flavor': {'id': '2e42a55e-71e2-4041-8ca2-725d63f058bf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '74bb08bf-1799-4930-aad4-d505f26ff5f4'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '5dde91941cac4081b671670d9a874621', 'user_id': '9177a32b390447b1acbb338fbf90b4bc', 'hostId': '29ad421cb9b3b7c2a60b6f4b5d034cd83cd01fba62f60fe162774580', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.776 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.776 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.776 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.777 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.778 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T22:57:37.777063) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.782 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 691446f5-d3d8-4a4f-a161-f2058a04a59d / tap2c9e194a-9e inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.782 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.782 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.783 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.783 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.783 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.783 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.783 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.783 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T22:57:37.783545) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.784 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.784 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.784 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.784 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.784 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.784 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.785 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T22:57:37.784775) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.785 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.785 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.785 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.785 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.785 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.786 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T22:57:37.785878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.806 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.807 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.807 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.808 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.808 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.808 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.808 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.809 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T22:57:37.808512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.849 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.read.bytes volume: 31001088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.850 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.850 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.850 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.850 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.850 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.851 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.851 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.read.latency volume: 503673283 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.851 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.read.latency volume: 63840540 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.851 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T22:57:37.851027) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.851 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.852 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.852 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.852 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.852 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.852 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.852 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.853 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.853 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.853 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.853 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.853 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.read.requests volume: 1131 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T22:57:37.852240) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.853 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.854 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.854 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.854 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.854 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.854 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.854 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.854 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.855 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.855 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.855 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.855 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.855 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.855 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.write.bytes volume: 72941568 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.855 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T22:57:37.853343) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.855 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.855 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T22:57:37.854429) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.856 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.856 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.856 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.856 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.856 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.856 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.write.latency volume: 3567580177 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.857 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.856 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T22:57:37.855618) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.857 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.857 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T22:57:37.856563) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.857 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.857 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.857 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.857 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.858 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T22:57:37.857604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 nova_compute[189508]: 2025-12-01 22:57:37.883 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "6a2b0a2e-1144-4264-917f-086024e18bed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:57:37 compute-0 nova_compute[189508]: 2025-12-01 22:57:37.884 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "6a2b0a2e-1144-4264-917f-086024e18bed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.890 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/cpu volume: 34470000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.891 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.891 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.891 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.891 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.891 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.891 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.892 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.892 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T22:57:37.891811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.892 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.892 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.892 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.892 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.892 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.892 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.893 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T22:57:37.892771) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.893 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.893 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.893 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.893 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.893 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.893 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.write.requests volume: 301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.893 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T22:57:37.893673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.894 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.894 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.894 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.894 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.894 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.894 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.895 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T22:57:37.894744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.894 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-871685025>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-871685025>]
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.895 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.895 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.895 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.895 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.895 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.896 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.896 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.896 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T22:57:37.895613) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.896 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.896 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.896 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.896 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.896 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.897 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.897 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.897 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.897 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.897 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T22:57:37.896463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.897 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.897 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T22:57:37.897347) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.897 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.897 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.898 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.898 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.898 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.898 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.898 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.898 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.898 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.898 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.899 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.899 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.899 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.899 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.899 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.899 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.899 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.899 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.899 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T22:57:37.898179) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T22:57:37.899017) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.900 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.900 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T22:57:37.899841) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.900 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.900 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.900 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.901 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T22:57:37.900772) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.901 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.901 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.901 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.901 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.901 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.901 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.901 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/memory.usage volume: 42.70703125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.901 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.902 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.902 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.902 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.902 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T22:57:37.901634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.902 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.902 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-871685025>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-871685025>]
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.903 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T22:57:37.902554) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.903 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.903 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.903 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.903 14 DEBUG ceilometer.compute.pollsters [-] 691446f5-d3d8-4a4f-a161-f2058a04a59d/network.incoming.bytes volume: 1796 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.903 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.905 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.905 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.905 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.905 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.905 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.905 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.905 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T22:57:37.903435) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.905 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.905 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.905 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.905 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.905 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.905 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.906 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.906 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.906 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:57:37.906 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:57:37 compute-0 nova_compute[189508]: 2025-12-01 22:57:37.907 189512 DEBUG nova.compute.manager [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.164 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.164 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.171 189512 DEBUG nova.virt.hardware [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.172 189512 INFO nova.compute.claims [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.348 189512 DEBUG nova.compute.provider_tree [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.362 189512 DEBUG nova.scheduler.client.report [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.388 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.389 189512 DEBUG nova.compute.manager [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.459 189512 DEBUG nova.compute.manager [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.459 189512 DEBUG nova.network.neutron [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.485 189512 INFO nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.503 189512 DEBUG nova.compute.manager [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.604 189512 DEBUG nova.compute.manager [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.605 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.605 189512 INFO nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Creating image(s)#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.606 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "/var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.606 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "/var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.607 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "/var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.621 189512 DEBUG oslo_concurrency.processutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.717 189512 DEBUG oslo_concurrency.processutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.718 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.719 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.736 189512 DEBUG oslo_concurrency.processutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.801 189512 DEBUG oslo_concurrency.processutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.802 189512 DEBUG oslo_concurrency.processutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270,backing_fmt=raw /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.857 189512 DEBUG oslo_concurrency.processutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270,backing_fmt=raw /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk 1073741824" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.859 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.859 189512 DEBUG oslo_concurrency.processutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:57:38 compute-0 podman[252392]: 2025-12-01 22:57:38.862675012 +0000 UTC m=+0.135056530 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec  1 22:57:38 compute-0 podman[252393]: 2025-12-01 22:57:38.864425952 +0000 UTC m=+0.132715334 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.932 189512 DEBUG oslo_concurrency.processutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.934 189512 DEBUG nova.virt.disk.api [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Checking if we can resize image /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.935 189512 DEBUG oslo_concurrency.processutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.993 189512 DEBUG oslo_concurrency.processutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:57:38 compute-0 nova_compute[189508]: 2025-12-01 22:57:38.997 189512 DEBUG nova.virt.disk.api [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Cannot resize image /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 22:57:39 compute-0 nova_compute[189508]: 2025-12-01 22:57:39.000 189512 DEBUG nova.objects.instance [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lazy-loading 'migration_context' on Instance uuid 6a2b0a2e-1144-4264-917f-086024e18bed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:57:39 compute-0 nova_compute[189508]: 2025-12-01 22:57:39.018 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 22:57:39 compute-0 nova_compute[189508]: 2025-12-01 22:57:39.018 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Ensure instance console log exists: /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 22:57:39 compute-0 nova_compute[189508]: 2025-12-01 22:57:39.019 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:39 compute-0 nova_compute[189508]: 2025-12-01 22:57:39.019 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:39 compute-0 nova_compute[189508]: 2025-12-01 22:57:39.020 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:39 compute-0 nova_compute[189508]: 2025-12-01 22:57:39.186 189512 DEBUG nova.policy [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '786ce878f1d2401ab2375f67e5ebd78b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '43a7ae6a25114fd199de68dfe3d3217b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 22:57:39 compute-0 nova_compute[189508]: 2025-12-01 22:57:39.295 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:40 compute-0 nova_compute[189508]: 2025-12-01 22:57:40.530 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:40 compute-0 nova_compute[189508]: 2025-12-01 22:57:40.997 189512 DEBUG nova.network.neutron [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Successfully created port: 02f1eac6-306c-4fa9-82c7-6e9082828c65 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 22:57:41 compute-0 nova_compute[189508]: 2025-12-01 22:57:41.113 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:42 compute-0 nova_compute[189508]: 2025-12-01 22:57:42.293 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:42 compute-0 nova_compute[189508]: 2025-12-01 22:57:42.866 189512 DEBUG nova.network.neutron [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Successfully updated port: 02f1eac6-306c-4fa9-82c7-6e9082828c65 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 22:57:42 compute-0 nova_compute[189508]: 2025-12-01 22:57:42.882 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "refresh_cache-6a2b0a2e-1144-4264-917f-086024e18bed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:57:42 compute-0 nova_compute[189508]: 2025-12-01 22:57:42.882 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquired lock "refresh_cache-6a2b0a2e-1144-4264-917f-086024e18bed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:57:42 compute-0 nova_compute[189508]: 2025-12-01 22:57:42.883 189512 DEBUG nova.network.neutron [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 22:57:42 compute-0 nova_compute[189508]: 2025-12-01 22:57:42.979 189512 DEBUG nova.compute.manager [req-6e115a5a-92a6-4579-8aeb-b7f824cc52a3 req-c3617939-00a1-46cb-a24a-c8b99e3bab88 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Received event network-changed-02f1eac6-306c-4fa9-82c7-6e9082828c65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:57:42 compute-0 nova_compute[189508]: 2025-12-01 22:57:42.981 189512 DEBUG nova.compute.manager [req-6e115a5a-92a6-4579-8aeb-b7f824cc52a3 req-c3617939-00a1-46cb-a24a-c8b99e3bab88 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Refreshing instance network info cache due to event network-changed-02f1eac6-306c-4fa9-82c7-6e9082828c65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:57:42 compute-0 nova_compute[189508]: 2025-12-01 22:57:42.982 189512 DEBUG oslo_concurrency.lockutils [req-6e115a5a-92a6-4579-8aeb-b7f824cc52a3 req-c3617939-00a1-46cb-a24a-c8b99e3bab88 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-6a2b0a2e-1144-4264-917f-086024e18bed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:57:43 compute-0 nova_compute[189508]: 2025-12-01 22:57:43.086 189512 DEBUG nova.network.neutron [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 22:57:43 compute-0 podman[252451]: 2025-12-01 22:57:43.836219395 +0000 UTC m=+0.100728257 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 22:57:43 compute-0 podman[252453]: 2025-12-01 22:57:43.844449538 +0000 UTC m=+0.097686491 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, vcs-type=git, config_id=edpm, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.openshift.tags=minimal rhel9)
Dec  1 22:57:43 compute-0 podman[252454]: 2025-12-01 22:57:43.857262511 +0000 UTC m=+0.104821413 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, distribution-scope=public, name=ubi9, release=1214.1726694543)
Dec  1 22:57:43 compute-0 podman[252452]: 2025-12-01 22:57:43.898950573 +0000 UTC m=+0.149744757 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 22:57:44 compute-0 nova_compute[189508]: 2025-12-01 22:57:44.300 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.383 189512 DEBUG nova.network.neutron [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Updating instance_info_cache with network_info: [{"id": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "address": "fa:16:3e:67:9d:a6", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f1eac6-30", "ovs_interfaceid": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.516 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Releasing lock "refresh_cache-6a2b0a2e-1144-4264-917f-086024e18bed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.516 189512 DEBUG nova.compute.manager [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Instance network_info: |[{"id": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "address": "fa:16:3e:67:9d:a6", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f1eac6-30", "ovs_interfaceid": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.517 189512 DEBUG oslo_concurrency.lockutils [req-6e115a5a-92a6-4579-8aeb-b7f824cc52a3 req-c3617939-00a1-46cb-a24a-c8b99e3bab88 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-6a2b0a2e-1144-4264-917f-086024e18bed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.517 189512 DEBUG nova.network.neutron [req-6e115a5a-92a6-4579-8aeb-b7f824cc52a3 req-c3617939-00a1-46cb-a24a-c8b99e3bab88 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Refreshing network info cache for port 02f1eac6-306c-4fa9-82c7-6e9082828c65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.519 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Start _get_guest_xml network_info=[{"id": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "address": "fa:16:3e:67:9d:a6", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f1eac6-30", "ovs_interfaceid": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T22:55:21Z,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T22:55:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '74bb08bf-1799-4930-aad4-d505f26ff5f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.532 189512 WARNING nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.533 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.540 189512 DEBUG nova.virt.libvirt.host [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.541 189512 DEBUG nova.virt.libvirt.host [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.546 189512 DEBUG nova.virt.libvirt.host [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.546 189512 DEBUG nova.virt.libvirt.host [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.547 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.548 189512 DEBUG nova.virt.hardware [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T22:55:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e42a55e-71e2-4041-8ca2-725d63f058bf',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T22:55:21Z,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T22:55:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.548 189512 DEBUG nova.virt.hardware [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.549 189512 DEBUG nova.virt.hardware [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.549 189512 DEBUG nova.virt.hardware [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.550 189512 DEBUG nova.virt.hardware [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.550 189512 DEBUG nova.virt.hardware [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.551 189512 DEBUG nova.virt.hardware [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.551 189512 DEBUG nova.virt.hardware [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.552 189512 DEBUG nova.virt.hardware [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.552 189512 DEBUG nova.virt.hardware [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.553 189512 DEBUG nova.virt.hardware [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.557 189512 DEBUG nova.virt.libvirt.vif [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:57:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960241782',display_name='tempest-TestNetworkBasicOps-server-1960241782',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960241782',id=10,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhExVoUayFMe+jrrrTAUsXIJCndRWHxq1SKk64GclRI1Ri0NLopX756w2GxPIq7V/BCaKXA48bYoWHaVL6kcj1zZ+n+zH01SVT7NBtNAfvGLVXZdp1srCd+VlTCV1sUJw==',key_name='tempest-TestNetworkBasicOps-894511931',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43a7ae6a25114fd199de68dfe3d3217b',ramdisk_id='',reservation_id='r-0jnsvsjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1418827846',owner_user_name='tempest-TestNetworkBasicOps-1418827846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:57:38Z,user_data=None,user_id='786ce878f1d2401ab2375f67e5ebd78b',uuid=6a2b0a2e-1144-4264-917f-086024e18bed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "address": "fa:16:3e:67:9d:a6", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f1eac6-30", "ovs_interfaceid": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.558 189512 DEBUG nova.network.os_vif_util [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Converting VIF {"id": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "address": "fa:16:3e:67:9d:a6", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f1eac6-30", "ovs_interfaceid": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.558 189512 DEBUG nova.network.os_vif_util [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:a6,bridge_name='br-int',has_traffic_filtering=True,id=02f1eac6-306c-4fa9-82c7-6e9082828c65,network=Network(513808ab-c863-4790-88e3-b64040a0ed8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02f1eac6-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.560 189512 DEBUG nova.objects.instance [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a2b0a2e-1144-4264-917f-086024e18bed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.576 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] End _get_guest_xml xml=<domain type="kvm">
Dec  1 22:57:45 compute-0 nova_compute[189508]:  <uuid>6a2b0a2e-1144-4264-917f-086024e18bed</uuid>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  <name>instance-0000000a</name>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  <memory>131072</memory>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  <vcpu>1</vcpu>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  <metadata>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <nova:name>tempest-TestNetworkBasicOps-server-1960241782</nova:name>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <nova:creationTime>2025-12-01 22:57:45</nova:creationTime>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <nova:flavor name="m1.nano">
Dec  1 22:57:45 compute-0 nova_compute[189508]:        <nova:memory>128</nova:memory>
Dec  1 22:57:45 compute-0 nova_compute[189508]:        <nova:disk>1</nova:disk>
Dec  1 22:57:45 compute-0 nova_compute[189508]:        <nova:swap>0</nova:swap>
Dec  1 22:57:45 compute-0 nova_compute[189508]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 22:57:45 compute-0 nova_compute[189508]:        <nova:vcpus>1</nova:vcpus>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      </nova:flavor>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <nova:owner>
Dec  1 22:57:45 compute-0 nova_compute[189508]:        <nova:user uuid="786ce878f1d2401ab2375f67e5ebd78b">tempest-TestNetworkBasicOps-1418827846-project-member</nova:user>
Dec  1 22:57:45 compute-0 nova_compute[189508]:        <nova:project uuid="43a7ae6a25114fd199de68dfe3d3217b">tempest-TestNetworkBasicOps-1418827846</nova:project>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      </nova:owner>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <nova:root type="image" uuid="74bb08bf-1799-4930-aad4-d505f26ff5f4"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <nova:ports>
Dec  1 22:57:45 compute-0 nova_compute[189508]:        <nova:port uuid="02f1eac6-306c-4fa9-82c7-6e9082828c65">
Dec  1 22:57:45 compute-0 nova_compute[189508]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:        </nova:port>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      </nova:ports>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    </nova:instance>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  </metadata>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  <sysinfo type="smbios">
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <system>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <entry name="manufacturer">RDO</entry>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <entry name="product">OpenStack Compute</entry>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <entry name="serial">6a2b0a2e-1144-4264-917f-086024e18bed</entry>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <entry name="uuid">6a2b0a2e-1144-4264-917f-086024e18bed</entry>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <entry name="family">Virtual Machine</entry>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    </system>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  </sysinfo>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  <os>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <boot dev="hd"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <smbios mode="sysinfo"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  </os>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  <features>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <acpi/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <apic/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <vmcoreinfo/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  </features>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  <clock offset="utc">
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <timer name="hpet" present="no"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  </clock>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  <cpu mode="host-model" match="exact">
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  </cpu>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  <devices>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <target dev="vda" bus="virtio"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <disk type="file" device="cdrom">
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk.config"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <target dev="sda" bus="sata"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <interface type="ethernet">
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <mac address="fa:16:3e:67:9d:a6"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <mtu size="1442"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <target dev="tap02f1eac6-30"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    </interface>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <serial type="pty">
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <log file="/var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/console.log" append="off"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    </serial>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <video>
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    </video>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <input type="tablet" bus="usb"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <rng model="virtio">
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <backend model="random">/dev/urandom</backend>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    </rng>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <controller type="usb" index="0"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    <memballoon model="virtio">
Dec  1 22:57:45 compute-0 nova_compute[189508]:      <stats period="10"/>
Dec  1 22:57:45 compute-0 nova_compute[189508]:    </memballoon>
Dec  1 22:57:45 compute-0 nova_compute[189508]:  </devices>
Dec  1 22:57:45 compute-0 nova_compute[189508]: </domain>
Dec  1 22:57:45 compute-0 nova_compute[189508]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.577 189512 DEBUG nova.compute.manager [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Preparing to wait for external event network-vif-plugged-02f1eac6-306c-4fa9-82c7-6e9082828c65 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.578 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.578 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.578 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.579 189512 DEBUG nova.virt.libvirt.vif [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:57:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960241782',display_name='tempest-TestNetworkBasicOps-server-1960241782',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960241782',id=10,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhExVoUayFMe+jrrrTAUsXIJCndRWHxq1SKk64GclRI1Ri0NLopX756w2GxPIq7V/BCaKXA48bYoWHaVL6kcj1zZ+n+zH01SVT7NBtNAfvGLVXZdp1srCd+VlTCV1sUJw==',key_name='tempest-TestNetworkBasicOps-894511931',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43a7ae6a25114fd199de68dfe3d3217b',ramdisk_id='',reservation_id='r-0jnsvsjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1418827846',owner_user_name='tempest-TestNetworkBasicOps-1418827846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:57:38Z,user_data=None,user_id='786ce878f1d2401ab2375f67e5ebd78b',uuid=6a2b0a2e-1144-4264-917f-086024e18bed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "address": "fa:16:3e:67:9d:a6", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f1eac6-30", "ovs_interfaceid": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.579 189512 DEBUG nova.network.os_vif_util [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Converting VIF {"id": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "address": "fa:16:3e:67:9d:a6", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f1eac6-30", "ovs_interfaceid": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.580 189512 DEBUG nova.network.os_vif_util [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:a6,bridge_name='br-int',has_traffic_filtering=True,id=02f1eac6-306c-4fa9-82c7-6e9082828c65,network=Network(513808ab-c863-4790-88e3-b64040a0ed8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02f1eac6-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.580 189512 DEBUG os_vif [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:a6,bridge_name='br-int',has_traffic_filtering=True,id=02f1eac6-306c-4fa9-82c7-6e9082828c65,network=Network(513808ab-c863-4790-88e3-b64040a0ed8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02f1eac6-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.581 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.581 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.581 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.584 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.584 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02f1eac6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.584 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap02f1eac6-30, col_values=(('external_ids', {'iface-id': '02f1eac6-306c-4fa9-82c7-6e9082828c65', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:9d:a6', 'vm-uuid': '6a2b0a2e-1144-4264-917f-086024e18bed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.586 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:45 compute-0 NetworkManager[56278]: <info>  [1764629865.5879] manager: (tap02f1eac6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.588 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.596 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.597 189512 INFO os_vif [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:9d:a6,bridge_name='br-int',has_traffic_filtering=True,id=02f1eac6-306c-4fa9-82c7-6e9082828c65,network=Network(513808ab-c863-4790-88e3-b64040a0ed8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02f1eac6-30')#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.663 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.663 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.664 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] No VIF found with MAC fa:16:3e:67:9d:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 22:57:45 compute-0 nova_compute[189508]: 2025-12-01 22:57:45.664 189512 INFO nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Using config drive#033[00m
Dec  1 22:57:46 compute-0 nova_compute[189508]: 2025-12-01 22:57:46.431 189512 INFO nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Creating config drive at /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk.config#033[00m
Dec  1 22:57:46 compute-0 nova_compute[189508]: 2025-12-01 22:57:46.440 189512 DEBUG oslo_concurrency.processutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeyexgjtm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:57:46 compute-0 nova_compute[189508]: 2025-12-01 22:57:46.568 189512 DEBUG oslo_concurrency.processutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeyexgjtm" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:57:46 compute-0 nova_compute[189508]: 2025-12-01 22:57:46.577 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:46 compute-0 kernel: tap02f1eac6-30: entered promiscuous mode
Dec  1 22:57:46 compute-0 NetworkManager[56278]: <info>  [1764629866.6668] manager: (tap02f1eac6-30): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Dec  1 22:57:46 compute-0 nova_compute[189508]: 2025-12-01 22:57:46.670 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:46 compute-0 nova_compute[189508]: 2025-12-01 22:57:46.677 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:46 compute-0 ovn_controller[97770]: 2025-12-01T22:57:46Z|00104|binding|INFO|Claiming lport 02f1eac6-306c-4fa9-82c7-6e9082828c65 for this chassis.
Dec  1 22:57:46 compute-0 ovn_controller[97770]: 2025-12-01T22:57:46Z|00105|binding|INFO|02f1eac6-306c-4fa9-82c7-6e9082828c65: Claiming fa:16:3e:67:9d:a6 10.100.0.10
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.686 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:9d:a6 10.100.0.10'], port_security=['fa:16:3e:67:9d:a6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6a2b0a2e-1144-4264-917f-086024e18bed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-513808ab-c863-4790-88e3-b64040a0ed8a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43a7ae6a25114fd199de68dfe3d3217b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd8e736c0-3ac7-45a4-b71c-33bc93594c74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e643dba6-de01-4938-9750-33d8ce8dfa77, chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=02f1eac6-306c-4fa9-82c7-6e9082828c65) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.689 106662 INFO neutron.agent.ovn.metadata.agent [-] Port 02f1eac6-306c-4fa9-82c7-6e9082828c65 in datapath 513808ab-c863-4790-88e3-b64040a0ed8a bound to our chassis#033[00m
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.693 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 513808ab-c863-4790-88e3-b64040a0ed8a#033[00m
Dec  1 22:57:46 compute-0 ovn_controller[97770]: 2025-12-01T22:57:46Z|00106|binding|INFO|Setting lport 02f1eac6-306c-4fa9-82c7-6e9082828c65 ovn-installed in OVS
Dec  1 22:57:46 compute-0 ovn_controller[97770]: 2025-12-01T22:57:46Z|00107|binding|INFO|Setting lport 02f1eac6-306c-4fa9-82c7-6e9082828c65 up in Southbound
Dec  1 22:57:46 compute-0 nova_compute[189508]: 2025-12-01 22:57:46.696 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:46 compute-0 nova_compute[189508]: 2025-12-01 22:57:46.703 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.705 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[f33f90c6-c831-4944-8ef5-805b51165459]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.706 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap513808ab-c1 in ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.709 239973 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap513808ab-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.709 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[b7982802-e1c8-44a2-9398-7596ea79d596]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.711 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[05bac846-f129-4965-9992-3975269827d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.724 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[453f1f10-e92c-4d40-858e-660d9d51a9c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:46 compute-0 systemd-machined[155759]: New machine qemu-10-instance-0000000a.
Dec  1 22:57:46 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.745 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[23890a2e-63a8-4405-8440-1d2f8ad3fbd1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:46 compute-0 systemd-udevd[252558]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.775 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff63fd5-d005-4a9a-8452-1de0a4555aee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:46 compute-0 NetworkManager[56278]: <info>  [1764629866.7787] device (tap02f1eac6-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 22:57:46 compute-0 NetworkManager[56278]: <info>  [1764629866.7827] device (tap02f1eac6-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.786 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba10ef4-d44b-415c-9651-8e876a538a07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:46 compute-0 NetworkManager[56278]: <info>  [1764629866.7881] manager: (tap513808ab-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.815 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[95e7c31d-dc86-433d-bde2-681b692752c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.819 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[bd6540fd-17cd-4e9f-88d8-f16495692578]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:46 compute-0 NetworkManager[56278]: <info>  [1764629866.8441] device (tap513808ab-c0): carrier: link connected
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.855 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[ebcc3918-b21f-4e08-b131-e998911ed224]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.880 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[c9acd358-0686-4b4f-b9a9-e8bbb19d218d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap513808ab-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0c:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537641, 'reachable_time': 30370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252589, 'error': None, 'target': 'ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.901 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[5c05774e-c582-4b59-a924-0025f93ae87e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:c16'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537641, 'tstamp': 537641}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252590, 'error': None, 'target': 'ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.919 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[156e485c-af76-4252-8a41-2ae3bd033dac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap513808ab-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0c:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537641, 'reachable_time': 30370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252591, 'error': None, 'target': 'ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:46.962 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[05d399c3-3677-4a50-9589-f99fcf546d9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:47.045 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[ef076e30-99ec-4145-b631-1786eaf03178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:47.046 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap513808ab-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:47.047 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:47.047 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap513808ab-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:57:47 compute-0 NetworkManager[56278]: <info>  [1764629867.0511] manager: (tap513808ab-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.050 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:47 compute-0 kernel: tap513808ab-c0: entered promiscuous mode
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.056 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:47.057 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap513808ab-c0, col_values=(('external_ids', {'iface-id': 'c21d900e-9830-49c7-a1df-ef9de7493e3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:57:47 compute-0 ovn_controller[97770]: 2025-12-01T22:57:47Z|00108|binding|INFO|Releasing lport c21d900e-9830-49c7-a1df-ef9de7493e3f from this chassis (sb_readonly=0)
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.059 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:47.083 106662 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/513808ab-c863-4790-88e3-b64040a0ed8a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/513808ab-c863-4790-88e3-b64040a0ed8a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.082 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:47.084 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[0761bd6d-798a-4b8c-a13a-d165a361649c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:47.086 106662 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]: global
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    log         /dev/log local0 debug
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    log-tag     haproxy-metadata-proxy-513808ab-c863-4790-88e3-b64040a0ed8a
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    user        root
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    group       root
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    maxconn     1024
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    pidfile     /var/lib/neutron/external/pids/513808ab-c863-4790-88e3-b64040a0ed8a.pid.haproxy
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    daemon
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]: defaults
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    log global
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    mode http
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    option httplog
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    option dontlognull
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    option http-server-close
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    option forwardfor
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    retries                 3
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    timeout http-request    30s
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    timeout connect         30s
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    timeout client          32s
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    timeout server          32s
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    timeout http-keep-alive 30s
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]: listen listener
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    bind 169.254.169.254:80
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]:    http-request add-header X-OVN-Network-ID 513808ab-c863-4790-88e3-b64040a0ed8a
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 22:57:47 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:47.086 106662 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a', 'env', 'PROCESS_TAG=haproxy-513808ab-c863-4790-88e3-b64040a0ed8a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/513808ab-c863-4790-88e3-b64040a0ed8a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.245 189512 DEBUG nova.objects.instance [None req-c4927163-39f4-4460-8ef4-85c26cee1941 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lazy-loading 'flavor' on Instance uuid 691446f5-d3d8-4a4f-a161-f2058a04a59d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.246 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.292 189512 DEBUG oslo_concurrency.lockutils [None req-c4927163-39f4-4460-8ef4-85c26cee1941 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Acquiring lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.293 189512 DEBUG oslo_concurrency.lockutils [None req-c4927163-39f4-4460-8ef4-85c26cee1941 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Acquired lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.386 189512 DEBUG nova.network.neutron [req-6e115a5a-92a6-4579-8aeb-b7f824cc52a3 req-c3617939-00a1-46cb-a24a-c8b99e3bab88 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Updated VIF entry in instance network info cache for port 02f1eac6-306c-4fa9-82c7-6e9082828c65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.387 189512 DEBUG nova.network.neutron [req-6e115a5a-92a6-4579-8aeb-b7f824cc52a3 req-c3617939-00a1-46cb-a24a-c8b99e3bab88 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Updating instance_info_cache with network_info: [{"id": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "address": "fa:16:3e:67:9d:a6", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f1eac6-30", "ovs_interfaceid": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.419 189512 DEBUG oslo_concurrency.lockutils [req-6e115a5a-92a6-4579-8aeb-b7f824cc52a3 req-c3617939-00a1-46cb-a24a-c8b99e3bab88 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-6a2b0a2e-1144-4264-917f-086024e18bed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.474 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629867.4734213, 6a2b0a2e-1144-4264-917f-086024e18bed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.475 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] VM Started (Lifecycle Event)#033[00m
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.495 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.504 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629867.4734898, 6a2b0a2e-1144-4264-917f-086024e18bed => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.505 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] VM Paused (Lifecycle Event)#033[00m
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.524 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.529 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:57:47 compute-0 podman[252629]: 2025-12-01 22:57:47.551688635 +0000 UTC m=+0.075905883 container create 38ddc6965d204bf69ec6037f29faba6d00a7d07659e28438a186bd3cbf97e75b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  1 22:57:47 compute-0 nova_compute[189508]: 2025-12-01 22:57:47.562 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:57:47 compute-0 systemd[1]: Started libpod-conmon-38ddc6965d204bf69ec6037f29faba6d00a7d07659e28438a186bd3cbf97e75b.scope.
Dec  1 22:57:47 compute-0 podman[252629]: 2025-12-01 22:57:47.514631455 +0000 UTC m=+0.038848723 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 22:57:47 compute-0 systemd[1]: Started libcrun container.
Dec  1 22:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aca7c958599cc980fc6c70c600d7cad8601c121aa41fd579c74787664142bab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 22:57:47 compute-0 podman[252629]: 2025-12-01 22:57:47.648204922 +0000 UTC m=+0.172422210 container init 38ddc6965d204bf69ec6037f29faba6d00a7d07659e28438a186bd3cbf97e75b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 22:57:47 compute-0 podman[252629]: 2025-12-01 22:57:47.65765693 +0000 UTC m=+0.181874198 container start 38ddc6965d204bf69ec6037f29faba6d00a7d07659e28438a186bd3cbf97e75b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 22:57:47 compute-0 neutron-haproxy-ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a[252643]: [NOTICE]   (252647) : New worker (252649) forked
Dec  1 22:57:47 compute-0 neutron-haproxy-ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a[252643]: [NOTICE]   (252647) : Loading success.
Dec  1 22:57:49 compute-0 nova_compute[189508]: 2025-12-01 22:57:49.493 189512 DEBUG nova.network.neutron [None req-c4927163-39f4-4460-8ef4-85c26cee1941 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 22:57:49 compute-0 nova_compute[189508]: 2025-12-01 22:57:49.639 189512 DEBUG nova.compute.manager [req-667aee53-3f55-4f51-a6a3-0d1049fcfbfd req-7b092a58-a374-40f9-8443-c087f6cbbe63 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Received event network-changed-2c9e194a-9ee9-406f-8afb-aba53adbc9d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:57:49 compute-0 nova_compute[189508]: 2025-12-01 22:57:49.639 189512 DEBUG nova.compute.manager [req-667aee53-3f55-4f51-a6a3-0d1049fcfbfd req-7b092a58-a374-40f9-8443-c087f6cbbe63 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Refreshing instance network info cache due to event network-changed-2c9e194a-9ee9-406f-8afb-aba53adbc9d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:57:49 compute-0 nova_compute[189508]: 2025-12-01 22:57:49.640 189512 DEBUG oslo_concurrency.lockutils [req-667aee53-3f55-4f51-a6a3-0d1049fcfbfd req-7b092a58-a374-40f9-8443-c087f6cbbe63 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:57:49 compute-0 nova_compute[189508]: 2025-12-01 22:57:49.665 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:57:50 compute-0 nova_compute[189508]: 2025-12-01 22:57:50.101 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:50 compute-0 nova_compute[189508]: 2025-12-01 22:57:50.246 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:50 compute-0 nova_compute[189508]: 2025-12-01 22:57:50.538 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:50 compute-0 nova_compute[189508]: 2025-12-01 22:57:50.587 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:52 compute-0 nova_compute[189508]: 2025-12-01 22:57:52.307 189512 DEBUG nova.network.neutron [None req-c4927163-39f4-4460-8ef4-85c26cee1941 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Updating instance_info_cache with network_info: [{"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:57:52 compute-0 nova_compute[189508]: 2025-12-01 22:57:52.354 189512 DEBUG oslo_concurrency.lockutils [None req-c4927163-39f4-4460-8ef4-85c26cee1941 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Releasing lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:57:52 compute-0 nova_compute[189508]: 2025-12-01 22:57:52.356 189512 DEBUG nova.compute.manager [None req-c4927163-39f4-4460-8ef4-85c26cee1941 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec  1 22:57:52 compute-0 nova_compute[189508]: 2025-12-01 22:57:52.357 189512 DEBUG nova.compute.manager [None req-c4927163-39f4-4460-8ef4-85c26cee1941 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] network_info to inject: |[{"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec  1 22:57:52 compute-0 nova_compute[189508]: 2025-12-01 22:57:52.362 189512 DEBUG oslo_concurrency.lockutils [req-667aee53-3f55-4f51-a6a3-0d1049fcfbfd req-7b092a58-a374-40f9-8443-c087f6cbbe63 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:57:52 compute-0 nova_compute[189508]: 2025-12-01 22:57:52.363 189512 DEBUG nova.network.neutron [req-667aee53-3f55-4f51-a6a3-0d1049fcfbfd req-7b092a58-a374-40f9-8443-c087f6cbbe63 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Refreshing network info cache for port 2c9e194a-9ee9-406f-8afb-aba53adbc9d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:57:53 compute-0 nova_compute[189508]: 2025-12-01 22:57:53.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 22:57:53 compute-0 nova_compute[189508]: 2025-12-01 22:57:53.493 189512 DEBUG nova.objects.instance [None req-130d20b3-70a2-4c9c-9aac-d648fc746242 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lazy-loading 'flavor' on Instance uuid 691446f5-d3d8-4a4f-a161-f2058a04a59d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 22:57:53 compute-0 nova_compute[189508]: 2025-12-01 22:57:53.524 189512 DEBUG oslo_concurrency.lockutils [None req-130d20b3-70a2-4c9c-9aac-d648fc746242 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Acquiring lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 22:57:54 compute-0 nova_compute[189508]: 2025-12-01 22:57:54.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 22:57:54 compute-0 nova_compute[189508]: 2025-12-01 22:57:54.498 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:57:54 compute-0 nova_compute[189508]: 2025-12-01 22:57:54.946 189512 DEBUG nova.network.neutron [req-667aee53-3f55-4f51-a6a3-0d1049fcfbfd req-7b092a58-a374-40f9-8443-c087f6cbbe63 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Updated VIF entry in instance network info cache for port 2c9e194a-9ee9-406f-8afb-aba53adbc9d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  1 22:57:54 compute-0 nova_compute[189508]: 2025-12-01 22:57:54.947 189512 DEBUG nova.network.neutron [req-667aee53-3f55-4f51-a6a3-0d1049fcfbfd req-7b092a58-a374-40f9-8443-c087f6cbbe63 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Updating instance_info_cache with network_info: [{"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 22:57:54 compute-0 nova_compute[189508]: 2025-12-01 22:57:54.963 189512 DEBUG oslo_concurrency.lockutils [req-667aee53-3f55-4f51-a6a3-0d1049fcfbfd req-7b092a58-a374-40f9-8443-c087f6cbbe63 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 22:57:54 compute-0 nova_compute[189508]: 2025-12-01 22:57:54.965 189512 DEBUG oslo_concurrency.lockutils [None req-130d20b3-70a2-4c9c-9aac-d648fc746242 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Acquired lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.089 189512 DEBUG nova.compute.manager [req-047778ab-00e8-49c4-8ad7-bf8f0c2e39b1 req-b81160dc-8b49-4a48-b735-ea445ffe6900 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Received event network-vif-plugged-02f1eac6-306c-4fa9-82c7-6e9082828c65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.091 189512 DEBUG oslo_concurrency.lockutils [req-047778ab-00e8-49c4-8ad7-bf8f0c2e39b1 req-b81160dc-8b49-4a48-b735-ea445ffe6900 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.091 189512 DEBUG oslo_concurrency.lockutils [req-047778ab-00e8-49c4-8ad7-bf8f0c2e39b1 req-b81160dc-8b49-4a48-b735-ea445ffe6900 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.092 189512 DEBUG oslo_concurrency.lockutils [req-047778ab-00e8-49c4-8ad7-bf8f0c2e39b1 req-b81160dc-8b49-4a48-b735-ea445ffe6900 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.093 189512 DEBUG nova.compute.manager [req-047778ab-00e8-49c4-8ad7-bf8f0c2e39b1 req-b81160dc-8b49-4a48-b735-ea445ffe6900 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Processing event network-vif-plugged-02f1eac6-306c-4fa9-82c7-6e9082828c65 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.094 189512 DEBUG nova.compute.manager [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.100 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629875.0998435, 6a2b0a2e-1144-4264-917f-086024e18bed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.100 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] VM Resumed (Lifecycle Event)
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.102 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.109 189512 INFO nova.virt.libvirt.driver [-] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Instance spawned successfully.
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.109 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.145 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.157 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.159 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.160 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.161 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.163 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.164 189512 DEBUG nova.virt.libvirt.driver [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.178 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.209 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.240 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.257 189512 INFO nova.compute.manager [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Took 16.65 seconds to spawn the instance on the hypervisor.
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.258 189512 DEBUG nova.compute.manager [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.345 189512 INFO nova.compute.manager [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Took 17.36 seconds to build instance.
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.363 189512 DEBUG oslo_concurrency.lockutils [None req-20c37e61-fc13-4644-adc9-91d244ab3392 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "6a2b0a2e-1144-4264-917f-086024e18bed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.518 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.541 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:57:55 compute-0 nova_compute[189508]: 2025-12-01 22:57:55.590 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:57:56 compute-0 nova_compute[189508]: 2025-12-01 22:57:56.732 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Acquiring lock "4d450663-4303-4535-bc1a-72996000c25a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:57:56 compute-0 nova_compute[189508]: 2025-12-01 22:57:56.733 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:57:56 compute-0 nova_compute[189508]: 2025-12-01 22:57:56.764 189512 DEBUG nova.compute.manager [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  1 22:57:56 compute-0 nova_compute[189508]: 2025-12-01 22:57:56.977 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:57:56 compute-0 nova_compute[189508]: 2025-12-01 22:57:56.978 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:57:56 compute-0 nova_compute[189508]: 2025-12-01 22:57:56.990 189512 DEBUG nova.virt.hardware [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  1 22:57:56 compute-0 nova_compute[189508]: 2025-12-01 22:57:56.991 189512 INFO nova.compute.claims [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Claim successful on node compute-0.ctlplane.example.com
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.124 189512 DEBUG nova.scheduler.client.report [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Refreshing inventories for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.129 189512 DEBUG nova.network.neutron [None req-130d20b3-70a2-4c9c-9aac-d648fc746242 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.198 189512 DEBUG nova.compute.manager [req-0b080b91-6579-443e-bd3c-9fd5217589a1 req-3d4ced20-3b1e-48e4-84da-54f747c5ea46 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Received event network-vif-plugged-02f1eac6-306c-4fa9-82c7-6e9082828c65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.199 189512 DEBUG oslo_concurrency.lockutils [req-0b080b91-6579-443e-bd3c-9fd5217589a1 req-3d4ced20-3b1e-48e4-84da-54f747c5ea46 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.199 189512 DEBUG oslo_concurrency.lockutils [req-0b080b91-6579-443e-bd3c-9fd5217589a1 req-3d4ced20-3b1e-48e4-84da-54f747c5ea46 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.200 189512 DEBUG oslo_concurrency.lockutils [req-0b080b91-6579-443e-bd3c-9fd5217589a1 req-3d4ced20-3b1e-48e4-84da-54f747c5ea46 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.200 189512 DEBUG nova.compute.manager [req-0b080b91-6579-443e-bd3c-9fd5217589a1 req-3d4ced20-3b1e-48e4-84da-54f747c5ea46 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] No waiting events found dispatching network-vif-plugged-02f1eac6-306c-4fa9-82c7-6e9082828c65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.201 189512 WARNING nova.compute.manager [req-0b080b91-6579-443e-bd3c-9fd5217589a1 req-3d4ced20-3b1e-48e4-84da-54f747c5ea46 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Received unexpected event network-vif-plugged-02f1eac6-306c-4fa9-82c7-6e9082828c65 for instance with vm_state active and task_state None.
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.202 189512 DEBUG nova.scheduler.client.report [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Updating ProviderTree inventory for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.203 189512 DEBUG nova.compute.provider_tree [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Updating inventory in ProviderTree for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.222 189512 DEBUG nova.scheduler.client.report [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Refreshing aggregate associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.253 189512 DEBUG nova.scheduler.client.report [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Refreshing trait associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.327 189512 DEBUG nova.compute.provider_tree [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.507 189512 DEBUG nova.scheduler.client.report [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.569 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.570 189512 DEBUG nova.compute.manager [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.641 189512 DEBUG nova.compute.manager [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.643 189512 DEBUG nova.network.neutron [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.694 189512 INFO nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.712 189512 DEBUG nova.compute.manager [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.846 189512 DEBUG nova.compute.manager [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.848 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.849 189512 INFO nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Creating image(s)
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.850 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Acquiring lock "/var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.850 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "/var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.851 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "/var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.864 189512 DEBUG oslo_concurrency.processutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.896 189512 DEBUG nova.policy [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f27393706a734cf3bee31de08a363c23', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'faa4919c58ee4a458bdb25fd4271bfde', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.938 189512 DEBUG oslo_concurrency.processutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.939 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Acquiring lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.940 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:57:57 compute-0 nova_compute[189508]: 2025-12-01 22:57:57.956 189512 DEBUG oslo_concurrency.processutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.049 189512 DEBUG oslo_concurrency.processutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.051 189512 DEBUG oslo_concurrency.processutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270,backing_fmt=raw /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.119 189512 DEBUG oslo_concurrency.processutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270,backing_fmt=raw /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk 1073741824" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.120 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.121 189512 DEBUG oslo_concurrency.processutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.216 189512 DEBUG oslo_concurrency.processutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.218 189512 DEBUG nova.virt.disk.api [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Checking if we can resize image /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.219 189512 DEBUG oslo_concurrency.processutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.283 189512 DEBUG oslo_concurrency.processutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.284 189512 DEBUG nova.virt.disk.api [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Cannot resize image /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.285 189512 DEBUG nova.objects.instance [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lazy-loading 'migration_context' on Instance uuid 4d450663-4303-4535-bc1a-72996000c25a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.327 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.329 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Ensure instance console log exists: /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.330 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.330 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.331 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:58 compute-0 nova_compute[189508]: 2025-12-01 22:57:58.390 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.117 189512 DEBUG nova.network.neutron [None req-130d20b3-70a2-4c9c-9aac-d648fc746242 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Updating instance_info_cache with network_info: [{"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.157 189512 DEBUG oslo_concurrency.lockutils [None req-130d20b3-70a2-4c9c-9aac-d648fc746242 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Releasing lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.160 189512 DEBUG nova.compute.manager [None req-130d20b3-70a2-4c9c-9aac-d648fc746242 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.160 189512 DEBUG nova.compute.manager [None req-130d20b3-70a2-4c9c-9aac-d648fc746242 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] network_info to inject: |[{"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.164 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.165 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.166 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid 691446f5-d3d8-4a4f-a161-f2058a04a59d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.364 189512 DEBUG nova.compute.manager [req-a0f2393e-8e92-4e60-a6af-3d5e5a3928de req-06e32a36-50c2-472c-bf18-181490f48c31 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Received event network-changed-2c9e194a-9ee9-406f-8afb-aba53adbc9d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.366 189512 DEBUG nova.compute.manager [req-a0f2393e-8e92-4e60-a6af-3d5e5a3928de req-06e32a36-50c2-472c-bf18-181490f48c31 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Refreshing instance network info cache due to event network-changed-2c9e194a-9ee9-406f-8afb-aba53adbc9d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.367 189512 DEBUG oslo_concurrency.lockutils [req-a0f2393e-8e92-4e60-a6af-3d5e5a3928de req-06e32a36-50c2-472c-bf18-181490f48c31 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.532 189512 DEBUG oslo_concurrency.lockutils [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Acquiring lock "691446f5-d3d8-4a4f-a161-f2058a04a59d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.535 189512 DEBUG oslo_concurrency.lockutils [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "691446f5-d3d8-4a4f-a161-f2058a04a59d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.536 189512 DEBUG oslo_concurrency.lockutils [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Acquiring lock "691446f5-d3d8-4a4f-a161-f2058a04a59d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.538 189512 DEBUG oslo_concurrency.lockutils [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "691446f5-d3d8-4a4f-a161-f2058a04a59d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.539 189512 DEBUG oslo_concurrency.lockutils [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "691446f5-d3d8-4a4f-a161-f2058a04a59d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.542 189512 INFO nova.compute.manager [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Terminating instance#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.546 189512 DEBUG nova.compute.manager [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 22:57:59 compute-0 kernel: tap2c9e194a-9e (unregistering): left promiscuous mode
Dec  1 22:57:59 compute-0 NetworkManager[56278]: <info>  [1764629879.5901] device (tap2c9e194a-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.619 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:59 compute-0 ovn_controller[97770]: 2025-12-01T22:57:59Z|00109|binding|INFO|Releasing lport 2c9e194a-9ee9-406f-8afb-aba53adbc9d7 from this chassis (sb_readonly=0)
Dec  1 22:57:59 compute-0 ovn_controller[97770]: 2025-12-01T22:57:59Z|00110|binding|INFO|Setting lport 2c9e194a-9ee9-406f-8afb-aba53adbc9d7 down in Southbound
Dec  1 22:57:59 compute-0 ovn_controller[97770]: 2025-12-01T22:57:59Z|00111|binding|INFO|Removing iface tap2c9e194a-9e ovn-installed in OVS
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.624 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:59 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:59.630 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:0a:ea 10.100.0.11'], port_security=['fa:16:3e:ad:0a:ea 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '691446f5-d3d8-4a4f-a161-f2058a04a59d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51d90832-bbf5-4d6e-98bd-38064caad349', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5dde91941cac4081b671670d9a874621', 'neutron:revision_number': '6', 'neutron:security_group_ids': '544b5cb0-fe7d-410d-9d36-89c1d5ce3010', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca374238-9b29-4fbb-8971-048cd0a5e9c0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=2c9e194a-9ee9-406f-8afb-aba53adbc9d7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:57:59 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:59.631 106662 INFO neutron.agent.ovn.metadata.agent [-] Port 2c9e194a-9ee9-406f-8afb-aba53adbc9d7 in datapath 51d90832-bbf5-4d6e-98bd-38064caad349 unbound from our chassis#033[00m
Dec  1 22:57:59 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:59.634 106662 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 51d90832-bbf5-4d6e-98bd-38064caad349, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 22:57:59 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:59.635 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[a148b6a8-9b77-4af8-9b6c-78ecf0724568]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:57:59 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:57:59.636 106662 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349 namespace which is not needed anymore#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.644 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:59 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec  1 22:57:59 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 43.346s CPU time.
Dec  1 22:57:59 compute-0 systemd-machined[155759]: Machine qemu-7-instance-00000007 terminated.
Dec  1 22:57:59 compute-0 podman[252684]: 2025-12-01 22:57:59.726667502 +0000 UTC m=+0.099258685 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 22:57:59 compute-0 podman[203693]: time="2025-12-01T22:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:57:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30756 "" "Go-http-client/1.1"
Dec  1 22:57:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5274 "" "Go-http-client/1.1"
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.780 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.790 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:59 compute-0 neutron-haproxy-ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349[251445]: [NOTICE]   (251449) : haproxy version is 2.8.14-c23fe91
Dec  1 22:57:59 compute-0 neutron-haproxy-ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349[251445]: [NOTICE]   (251449) : path to executable is /usr/sbin/haproxy
Dec  1 22:57:59 compute-0 neutron-haproxy-ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349[251445]: [WARNING]  (251449) : Exiting Master process...
Dec  1 22:57:59 compute-0 neutron-haproxy-ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349[251445]: [WARNING]  (251449) : Exiting Master process...
Dec  1 22:57:59 compute-0 neutron-haproxy-ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349[251445]: [ALERT]    (251449) : Current worker (251451) exited with code 143 (Terminated)
Dec  1 22:57:59 compute-0 neutron-haproxy-ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349[251445]: [WARNING]  (251449) : All workers exited. Exiting... (0)
Dec  1 22:57:59 compute-0 systemd[1]: libpod-b597812cd085860e933e9b3c6896e753687ad314b222b90bbeeaa64d60420cb8.scope: Deactivated successfully.
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.834 189512 INFO nova.virt.libvirt.driver [-] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Instance destroyed successfully.#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.835 189512 DEBUG nova.objects.instance [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lazy-loading 'resources' on Instance uuid 691446f5-d3d8-4a4f-a161-f2058a04a59d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:57:59 compute-0 podman[252731]: 2025-12-01 22:57:59.842588779 +0000 UTC m=+0.105748989 container died b597812cd085860e933e9b3c6896e753687ad314b222b90bbeeaa64d60420cb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.871 189512 DEBUG nova.virt.libvirt.vif [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:56:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-871685025',display_name='tempest-AttachInterfacesUnderV243Test-server-871685025',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-871685025',id=7,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUwdv+NY00dZ4Qak5VAhJonHJDg3QW/4qrZXWUPft55hAyY+K9JJ/IZy3JiB2DL4dT9YRZ4HS2lUokEK1+MWo4Kffjap+PoFdLJkWZvU88eiaYZMJygvq2Y3gk5LCAb/A==',key_name='tempest-keypair-1770308231',keypairs=<?>,launch_index=0,launched_at=2025-12-01T22:56:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5dde91941cac4081b671670d9a874621',ramdisk_id='',reservation_id='r-pp070lnj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1494013272',owner_user_name='tempest-AttachInterfacesUnderV243Test-1494013272-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T22:57:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9177a32b390447b1acbb338fbf90b4bc',uuid=691446f5-d3d8-4a4f-a161-f2058a04a59d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.871 189512 DEBUG nova.network.os_vif_util [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Converting VIF {"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.873 189512 DEBUG nova.network.os_vif_util [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ad:0a:ea,bridge_name='br-int',has_traffic_filtering=True,id=2c9e194a-9ee9-406f-8afb-aba53adbc9d7,network=Network(51d90832-bbf5-4d6e-98bd-38064caad349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c9e194a-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.874 189512 DEBUG os_vif [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:0a:ea,bridge_name='br-int',has_traffic_filtering=True,id=2c9e194a-9ee9-406f-8afb-aba53adbc9d7,network=Network(51d90832-bbf5-4d6e-98bd-38064caad349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c9e194a-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 22:57:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b597812cd085860e933e9b3c6896e753687ad314b222b90bbeeaa64d60420cb8-userdata-shm.mount: Deactivated successfully.
Dec  1 22:57:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-42911f31bf2d8136a681c844dcf48bdb9d5c184beba69980316e149b96b55c7a-merged.mount: Deactivated successfully.
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.881 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.883 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c9e194a-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.888 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:59 compute-0 podman[252731]: 2025-12-01 22:57:59.890475707 +0000 UTC m=+0.153635907 container cleanup b597812cd085860e933e9b3c6896e753687ad314b222b90bbeeaa64d60420cb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.891 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.892 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.895 189512 INFO os_vif [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:0a:ea,bridge_name='br-int',has_traffic_filtering=True,id=2c9e194a-9ee9-406f-8afb-aba53adbc9d7,network=Network(51d90832-bbf5-4d6e-98bd-38064caad349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c9e194a-9e')#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.896 189512 INFO nova.virt.libvirt.driver [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Deleting instance files /var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d_del#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.897 189512 INFO nova.virt.libvirt.driver [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Deletion of /var/lib/nova/instances/691446f5-d3d8-4a4f-a161-f2058a04a59d_del complete#033[00m
Dec  1 22:57:59 compute-0 systemd[1]: libpod-conmon-b597812cd085860e933e9b3c6896e753687ad314b222b90bbeeaa64d60420cb8.scope: Deactivated successfully.
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.996 189512 INFO nova.compute.manager [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Took 0.45 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.997 189512 DEBUG oslo.service.loopingcall [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 22:57:59 compute-0 nova_compute[189508]: 2025-12-01 22:57:59.999 189512 DEBUG nova.compute.manager [-] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 22:57:59 compute-0 podman[252772]: 2025-12-01 22:57:59.997590844 +0000 UTC m=+0.072902438 container remove b597812cd085860e933e9b3c6896e753687ad314b222b90bbeeaa64d60420cb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  1 22:58:00 compute-0 nova_compute[189508]: 2025-12-01 22:58:00.001 189512 DEBUG nova.network.neutron [-] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 22:58:00 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:00.015 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[cf72c181-27a6-4338-8a10-b8e79e3345b2]: (4, ('Mon Dec  1 10:57:59 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349 (b597812cd085860e933e9b3c6896e753687ad314b222b90bbeeaa64d60420cb8)\nb597812cd085860e933e9b3c6896e753687ad314b222b90bbeeaa64d60420cb8\nMon Dec  1 10:57:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349 (b597812cd085860e933e9b3c6896e753687ad314b222b90bbeeaa64d60420cb8)\nb597812cd085860e933e9b3c6896e753687ad314b222b90bbeeaa64d60420cb8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:00 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:00.017 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[efb7b4e5-cb0e-482d-a1a0-f45d8479bf6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:00 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:00.018 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51d90832-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:58:00 compute-0 nova_compute[189508]: 2025-12-01 22:58:00.020 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:00 compute-0 kernel: tap51d90832-b0: left promiscuous mode
Dec  1 22:58:00 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:00.026 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[7a703cba-8780-4453-b8f2-39fcbedf8024]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:00 compute-0 nova_compute[189508]: 2025-12-01 22:58:00.046 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:00 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:00.048 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[fa123f1e-c146-4d43-9c27-60367d83cb95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:00 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:00.051 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[db5612d2-76b0-4540-947e-6e3202ad0d4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:00 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:00.070 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[7e987b84-2f93-4f7a-ab49-022653545c6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528859, 'reachable_time': 25321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252787, 'error': None, 'target': 'ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:00 compute-0 systemd[1]: run-netns-ovnmeta\x2d51d90832\x2dbbf5\x2d4d6e\x2d98bd\x2d38064caad349.mount: Deactivated successfully.
Dec  1 22:58:00 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:00.074 106770 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-51d90832-bbf5-4d6e-98bd-38064caad349 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 22:58:00 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:00.074 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[3d166b07-f437-4486-a0ec-d788c74457d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:00 compute-0 nova_compute[189508]: 2025-12-01 22:58:00.260 189512 DEBUG nova.network.neutron [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Successfully created port: a139ed27-b785-495f-bc93-2f5daea46d42 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 22:58:00 compute-0 nova_compute[189508]: 2025-12-01 22:58:00.530 189512 DEBUG nova.compute.manager [req-7e75d846-edc7-4c9e-9758-7a62763d64b6 req-3077d2c7-c7e4-4c4c-9347-947ea3bdb308 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Received event network-changed-02f1eac6-306c-4fa9-82c7-6e9082828c65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:58:00 compute-0 nova_compute[189508]: 2025-12-01 22:58:00.530 189512 DEBUG nova.compute.manager [req-7e75d846-edc7-4c9e-9758-7a62763d64b6 req-3077d2c7-c7e4-4c4c-9347-947ea3bdb308 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Refreshing instance network info cache due to event network-changed-02f1eac6-306c-4fa9-82c7-6e9082828c65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:58:00 compute-0 nova_compute[189508]: 2025-12-01 22:58:00.531 189512 DEBUG oslo_concurrency.lockutils [req-7e75d846-edc7-4c9e-9758-7a62763d64b6 req-3077d2c7-c7e4-4c4c-9347-947ea3bdb308 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-6a2b0a2e-1144-4264-917f-086024e18bed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:58:00 compute-0 nova_compute[189508]: 2025-12-01 22:58:00.531 189512 DEBUG oslo_concurrency.lockutils [req-7e75d846-edc7-4c9e-9758-7a62763d64b6 req-3077d2c7-c7e4-4c4c-9347-947ea3bdb308 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-6a2b0a2e-1144-4264-917f-086024e18bed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:58:00 compute-0 nova_compute[189508]: 2025-12-01 22:58:00.531 189512 DEBUG nova.network.neutron [req-7e75d846-edc7-4c9e-9758-7a62763d64b6 req-3077d2c7-c7e4-4c4c-9347-947ea3bdb308 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Refreshing network info cache for port 02f1eac6-306c-4fa9-82c7-6e9082828c65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:58:00 compute-0 nova_compute[189508]: 2025-12-01 22:58:00.546 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:00 compute-0 nova_compute[189508]: 2025-12-01 22:58:00.764 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:01 compute-0 nova_compute[189508]: 2025-12-01 22:58:01.301 189512 DEBUG nova.network.neutron [-] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:58:01 compute-0 nova_compute[189508]: 2025-12-01 22:58:01.321 189512 INFO nova.compute.manager [-] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Took 1.32 seconds to deallocate network for instance.#033[00m
Dec  1 22:58:01 compute-0 nova_compute[189508]: 2025-12-01 22:58:01.385 189512 DEBUG oslo_concurrency.lockutils [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:01 compute-0 nova_compute[189508]: 2025-12-01 22:58:01.387 189512 DEBUG oslo_concurrency.lockutils [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:01 compute-0 openstack_network_exporter[205887]: ERROR   22:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:58:01 compute-0 openstack_network_exporter[205887]: ERROR   22:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:58:01 compute-0 openstack_network_exporter[205887]: ERROR   22:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:58:01 compute-0 openstack_network_exporter[205887]: ERROR   22:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:58:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:58:01 compute-0 openstack_network_exporter[205887]: ERROR   22:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:58:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:58:01 compute-0 nova_compute[189508]: 2025-12-01 22:58:01.497 189512 DEBUG nova.compute.manager [req-e0c33dd5-e2f3-44bb-82e7-2ec4a7215455 req-5f75ca91-7649-4a18-96e3-9cff2f8a5e7e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Received event network-vif-deleted-2c9e194a-9ee9-406f-8afb-aba53adbc9d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:58:01 compute-0 nova_compute[189508]: 2025-12-01 22:58:01.522 189512 DEBUG nova.compute.provider_tree [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:58:01 compute-0 nova_compute[189508]: 2025-12-01 22:58:01.545 189512 DEBUG nova.scheduler.client.report [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:58:01 compute-0 nova_compute[189508]: 2025-12-01 22:58:01.720 189512 DEBUG oslo_concurrency.lockutils [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:01 compute-0 nova_compute[189508]: 2025-12-01 22:58:01.764 189512 INFO nova.scheduler.client.report [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Deleted allocations for instance 691446f5-d3d8-4a4f-a161-f2058a04a59d#033[00m
Dec  1 22:58:01 compute-0 nova_compute[189508]: 2025-12-01 22:58:01.844 189512 DEBUG oslo_concurrency.lockutils [None req-a2a07cb5-26b6-43fc-80ee-f6cf0ad62d16 9177a32b390447b1acbb338fbf90b4bc 5dde91941cac4081b671670d9a874621 - - default default] Lock "691446f5-d3d8-4a4f-a161-f2058a04a59d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.219 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Updating instance_info_cache with network_info: [{"id": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "address": "fa:16:3e:ad:0a:ea", "network": {"id": "51d90832-bbf5-4d6e-98bd-38064caad349", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1252852700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5dde91941cac4081b671670d9a874621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c9e194a-9e", "ovs_interfaceid": "2c9e194a-9ee9-406f-8afb-aba53adbc9d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.254 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.255 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.256 189512 DEBUG oslo_concurrency.lockutils [req-a0f2393e-8e92-4e60-a6af-3d5e5a3928de req-06e32a36-50c2-472c-bf18-181490f48c31 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.257 189512 DEBUG nova.network.neutron [req-a0f2393e-8e92-4e60-a6af-3d5e5a3928de req-06e32a36-50c2-472c-bf18-181490f48c31 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Refreshing network info cache for port 2c9e194a-9ee9-406f-8afb-aba53adbc9d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.260 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.262 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.263 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.263 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.264 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.265 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.278 189512 DEBUG nova.compute.utils [req-a0f2393e-8e92-4e60-a6af-3d5e5a3928de req-06e32a36-50c2-472c-bf18-181490f48c31 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Can not refresh info_cache because instance was not found refresh_info_cache_for_instance /usr/lib/python3.9/site-packages/nova/compute/utils.py:1010#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.291 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.292 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.293 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.293 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.398 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.478 189512 INFO nova.network.neutron [req-a0f2393e-8e92-4e60-a6af-3d5e5a3928de req-06e32a36-50c2-472c-bf18-181490f48c31 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Port 2c9e194a-9ee9-406f-8afb-aba53adbc9d7 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.478 189512 DEBUG nova.network.neutron [req-a0f2393e-8e92-4e60-a6af-3d5e5a3928de req-06e32a36-50c2-472c-bf18-181490f48c31 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.480 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.481 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.500 189512 DEBUG oslo_concurrency.lockutils [req-a0f2393e-8e92-4e60-a6af-3d5e5a3928de req-06e32a36-50c2-472c-bf18-181490f48c31 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-691446f5-d3d8-4a4f-a161-f2058a04a59d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:58:02 compute-0 podman[252789]: 2025-12-01 22:58:02.499074743 +0000 UTC m=+0.120437156 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, tcib_managed=true)
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.549 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.811 189512 DEBUG nova.network.neutron [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Successfully updated port: a139ed27-b785-495f-bc93-2f5daea46d42 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.828 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Acquiring lock "refresh_cache-4d450663-4303-4535-bc1a-72996000c25a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.828 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Acquired lock "refresh_cache-4d450663-4303-4535-bc1a-72996000c25a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.829 189512 DEBUG nova.network.neutron [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.869 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.871 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5171MB free_disk=72.15763473510742GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.871 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.872 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.988 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 6a2b0a2e-1144-4264-917f-086024e18bed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.988 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 4d450663-4303-4535-bc1a-72996000c25a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.989 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:58:02 compute-0 nova_compute[189508]: 2025-12-01 22:58:02.989 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:58:03 compute-0 nova_compute[189508]: 2025-12-01 22:58:03.013 189512 DEBUG nova.compute.manager [req-8955f775-d7c0-4a78-b187-bc0bf71e14ff req-a0b6ea9e-6f8e-4727-9f1b-4d0fddda41d5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received event network-changed-a139ed27-b785-495f-bc93-2f5daea46d42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:58:03 compute-0 nova_compute[189508]: 2025-12-01 22:58:03.013 189512 DEBUG nova.compute.manager [req-8955f775-d7c0-4a78-b187-bc0bf71e14ff req-a0b6ea9e-6f8e-4727-9f1b-4d0fddda41d5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Refreshing instance network info cache due to event network-changed-a139ed27-b785-495f-bc93-2f5daea46d42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:58:03 compute-0 nova_compute[189508]: 2025-12-01 22:58:03.013 189512 DEBUG oslo_concurrency.lockutils [req-8955f775-d7c0-4a78-b187-bc0bf71e14ff req-a0b6ea9e-6f8e-4727-9f1b-4d0fddda41d5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-4d450663-4303-4535-bc1a-72996000c25a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:58:03 compute-0 nova_compute[189508]: 2025-12-01 22:58:03.091 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:58:03 compute-0 nova_compute[189508]: 2025-12-01 22:58:03.107 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:58:03 compute-0 nova_compute[189508]: 2025-12-01 22:58:03.135 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:58:03 compute-0 nova_compute[189508]: 2025-12-01 22:58:03.136 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:03 compute-0 nova_compute[189508]: 2025-12-01 22:58:03.137 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:58:03 compute-0 nova_compute[189508]: 2025-12-01 22:58:03.273 189512 DEBUG nova.network.neutron [req-7e75d846-edc7-4c9e-9758-7a62763d64b6 req-3077d2c7-c7e4-4c4c-9347-947ea3bdb308 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Updated VIF entry in instance network info cache for port 02f1eac6-306c-4fa9-82c7-6e9082828c65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:58:03 compute-0 nova_compute[189508]: 2025-12-01 22:58:03.274 189512 DEBUG nova.network.neutron [req-7e75d846-edc7-4c9e-9758-7a62763d64b6 req-3077d2c7-c7e4-4c4c-9347-947ea3bdb308 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Updating instance_info_cache with network_info: [{"id": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "address": "fa:16:3e:67:9d:a6", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f1eac6-30", "ovs_interfaceid": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:58:03 compute-0 nova_compute[189508]: 2025-12-01 22:58:03.346 189512 DEBUG oslo_concurrency.lockutils [req-7e75d846-edc7-4c9e-9758-7a62763d64b6 req-3077d2c7-c7e4-4c4c-9347-947ea3bdb308 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-6a2b0a2e-1144-4264-917f-086024e18bed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:58:03 compute-0 nova_compute[189508]: 2025-12-01 22:58:03.374 189512 DEBUG nova.network.neutron [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 22:58:03 compute-0 podman[252815]: 2025-12-01 22:58:03.864662525 +0000 UTC m=+0.138825908 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:58:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:04.640 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:04.641 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:04.641 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:04 compute-0 nova_compute[189508]: 2025-12-01 22:58:04.893 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.545 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.777 189512 DEBUG nova.network.neutron [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Updating instance_info_cache with network_info: [{"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.799 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Releasing lock "refresh_cache-4d450663-4303-4535-bc1a-72996000c25a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.800 189512 DEBUG nova.compute.manager [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Instance network_info: |[{"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.801 189512 DEBUG oslo_concurrency.lockutils [req-8955f775-d7c0-4a78-b187-bc0bf71e14ff req-a0b6ea9e-6f8e-4727-9f1b-4d0fddda41d5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-4d450663-4303-4535-bc1a-72996000c25a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.802 189512 DEBUG nova.network.neutron [req-8955f775-d7c0-4a78-b187-bc0bf71e14ff req-a0b6ea9e-6f8e-4727-9f1b-4d0fddda41d5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Refreshing network info cache for port a139ed27-b785-495f-bc93-2f5daea46d42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.806 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Start _get_guest_xml network_info=[{"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T22:55:21Z,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T22:55:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '74bb08bf-1799-4930-aad4-d505f26ff5f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.817 189512 WARNING nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.824 189512 DEBUG nova.virt.libvirt.host [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.825 189512 DEBUG nova.virt.libvirt.host [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.840 189512 DEBUG nova.virt.libvirt.host [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.842 189512 DEBUG nova.virt.libvirt.host [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.843 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.843 189512 DEBUG nova.virt.hardware [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T22:55:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e42a55e-71e2-4041-8ca2-725d63f058bf',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T22:55:21Z,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T22:55:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.844 189512 DEBUG nova.virt.hardware [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.845 189512 DEBUG nova.virt.hardware [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.845 189512 DEBUG nova.virt.hardware [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.845 189512 DEBUG nova.virt.hardware [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.846 189512 DEBUG nova.virt.hardware [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.846 189512 DEBUG nova.virt.hardware [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.847 189512 DEBUG nova.virt.hardware [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.847 189512 DEBUG nova.virt.hardware [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.848 189512 DEBUG nova.virt.hardware [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.849 189512 DEBUG nova.virt.hardware [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.854 189512 DEBUG nova.virt.libvirt.vif [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:57:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2091090341',display_name='tempest-ServerActionsTestJSON-server-2091090341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2091090341',id=11,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+fzJbRUs6xTpBTH6qdTI6/Z5W+mGfJgDYfAUhpF05jRUFQOpZmqCMJhmfo4TTDAEYfG1aq/+blNkmuIybaiFy/eDEp+yVFf0iSiXkStUapi+PgaOcCydfsaALgr/g66Q==',key_name='tempest-keypair-87244995',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faa4919c58ee4a458bdb25fd4271bfde',ramdisk_id='',reservation_id='r-lf97gff3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1483688623',owner_user_name='tempest-ServerActionsTestJSON-1483688623-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:57:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f27393706a734cf3bee31de08a363c23',uuid=4d450663-4303-4535-bc1a-72996000c25a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.854 189512 DEBUG nova.network.os_vif_util [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Converting VIF {"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.856 189512 DEBUG nova.network.os_vif_util [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:3e:a0,bridge_name='br-int',has_traffic_filtering=True,id=a139ed27-b785-495f-bc93-2f5daea46d42,network=Network(7c3d0516-109b-46fb-ab67-19206f614258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa139ed27-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.857 189512 DEBUG nova.objects.instance [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d450663-4303-4535-bc1a-72996000c25a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.876 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] End _get_guest_xml xml=<domain type="kvm">
Dec  1 22:58:05 compute-0 nova_compute[189508]:  <uuid>4d450663-4303-4535-bc1a-72996000c25a</uuid>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  <name>instance-0000000b</name>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  <memory>131072</memory>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  <vcpu>1</vcpu>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  <metadata>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <nova:name>tempest-ServerActionsTestJSON-server-2091090341</nova:name>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <nova:creationTime>2025-12-01 22:58:05</nova:creationTime>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <nova:flavor name="m1.nano">
Dec  1 22:58:05 compute-0 nova_compute[189508]:        <nova:memory>128</nova:memory>
Dec  1 22:58:05 compute-0 nova_compute[189508]:        <nova:disk>1</nova:disk>
Dec  1 22:58:05 compute-0 nova_compute[189508]:        <nova:swap>0</nova:swap>
Dec  1 22:58:05 compute-0 nova_compute[189508]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 22:58:05 compute-0 nova_compute[189508]:        <nova:vcpus>1</nova:vcpus>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      </nova:flavor>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <nova:owner>
Dec  1 22:58:05 compute-0 nova_compute[189508]:        <nova:user uuid="f27393706a734cf3bee31de08a363c23">tempest-ServerActionsTestJSON-1483688623-project-member</nova:user>
Dec  1 22:58:05 compute-0 nova_compute[189508]:        <nova:project uuid="faa4919c58ee4a458bdb25fd4271bfde">tempest-ServerActionsTestJSON-1483688623</nova:project>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      </nova:owner>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <nova:root type="image" uuid="74bb08bf-1799-4930-aad4-d505f26ff5f4"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <nova:ports>
Dec  1 22:58:05 compute-0 nova_compute[189508]:        <nova:port uuid="a139ed27-b785-495f-bc93-2f5daea46d42">
Dec  1 22:58:05 compute-0 nova_compute[189508]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:        </nova:port>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      </nova:ports>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    </nova:instance>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  </metadata>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  <sysinfo type="smbios">
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <system>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <entry name="manufacturer">RDO</entry>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <entry name="product">OpenStack Compute</entry>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <entry name="serial">4d450663-4303-4535-bc1a-72996000c25a</entry>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <entry name="uuid">4d450663-4303-4535-bc1a-72996000c25a</entry>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <entry name="family">Virtual Machine</entry>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    </system>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  </sysinfo>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  <os>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <boot dev="hd"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <smbios mode="sysinfo"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  </os>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  <features>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <acpi/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <apic/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <vmcoreinfo/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  </features>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  <clock offset="utc">
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <timer name="hpet" present="no"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  </clock>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  <cpu mode="host-model" match="exact">
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  </cpu>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  <devices>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <target dev="vda" bus="virtio"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <disk type="file" device="cdrom">
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk.config"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <target dev="sda" bus="sata"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <interface type="ethernet">
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <mac address="fa:16:3e:b8:3e:a0"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <mtu size="1442"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <target dev="tapa139ed27-b7"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    </interface>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <serial type="pty">
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <log file="/var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/console.log" append="off"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    </serial>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <video>
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    </video>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <input type="tablet" bus="usb"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <rng model="virtio">
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <backend model="random">/dev/urandom</backend>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    </rng>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <controller type="usb" index="0"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    <memballoon model="virtio">
Dec  1 22:58:05 compute-0 nova_compute[189508]:      <stats period="10"/>
Dec  1 22:58:05 compute-0 nova_compute[189508]:    </memballoon>
Dec  1 22:58:05 compute-0 nova_compute[189508]:  </devices>
Dec  1 22:58:05 compute-0 nova_compute[189508]: </domain>
Dec  1 22:58:05 compute-0 nova_compute[189508]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.887 189512 DEBUG nova.compute.manager [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Preparing to wait for external event network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.887 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Acquiring lock "4d450663-4303-4535-bc1a-72996000c25a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.888 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.888 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.889 189512 DEBUG nova.virt.libvirt.vif [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:57:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2091090341',display_name='tempest-ServerActionsTestJSON-server-2091090341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2091090341',id=11,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+fzJbRUs6xTpBTH6qdTI6/Z5W+mGfJgDYfAUhpF05jRUFQOpZmqCMJhmfo4TTDAEYfG1aq/+blNkmuIybaiFy/eDEp+yVFf0iSiXkStUapi+PgaOcCydfsaALgr/g66Q==',key_name='tempest-keypair-87244995',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faa4919c58ee4a458bdb25fd4271bfde',ramdisk_id='',reservation_id='r-lf97gff3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1483688623',owner_user_name='tempest-ServerActionsTestJSON-1483688623-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:57:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f27393706a734cf3bee31de08a363c23',uuid=4d450663-4303-4535-bc1a-72996000c25a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.889 189512 DEBUG nova.network.os_vif_util [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Converting VIF {"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.890 189512 DEBUG nova.network.os_vif_util [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:3e:a0,bridge_name='br-int',has_traffic_filtering=True,id=a139ed27-b785-495f-bc93-2f5daea46d42,network=Network(7c3d0516-109b-46fb-ab67-19206f614258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa139ed27-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.890 189512 DEBUG os_vif [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:3e:a0,bridge_name='br-int',has_traffic_filtering=True,id=a139ed27-b785-495f-bc93-2f5daea46d42,network=Network(7c3d0516-109b-46fb-ab67-19206f614258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa139ed27-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.891 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.891 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.892 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.897 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.898 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa139ed27-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.898 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa139ed27-b7, col_values=(('external_ids', {'iface-id': 'a139ed27-b785-495f-bc93-2f5daea46d42', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:3e:a0', 'vm-uuid': '4d450663-4303-4535-bc1a-72996000c25a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.900 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.901 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  1 22:58:05 compute-0 NetworkManager[56278]: <info>  [1764629885.9026] manager: (tapa139ed27-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.909 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.910 189512 INFO os_vif [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:3e:a0,bridge_name='br-int',has_traffic_filtering=True,id=a139ed27-b785-495f-bc93-2f5daea46d42,network=Network(7c3d0516-109b-46fb-ab67-19206f614258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa139ed27-b7')
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.996 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.997 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.997 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] No VIF found with MAC fa:16:3e:b8:3e:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  1 22:58:05 compute-0 nova_compute[189508]: 2025-12-01 22:58:05.998 189512 INFO nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Using config drive
Dec  1 22:58:06 compute-0 nova_compute[189508]: 2025-12-01 22:58:06.494 189512 INFO nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Creating config drive at /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk.config
Dec  1 22:58:06 compute-0 nova_compute[189508]: 2025-12-01 22:58:06.500 189512 DEBUG oslo_concurrency.processutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx5_sfbj0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 22:58:06 compute-0 nova_compute[189508]: 2025-12-01 22:58:06.637 189512 DEBUG oslo_concurrency.processutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx5_sfbj0" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 22:58:06 compute-0 kernel: tapa139ed27-b7: entered promiscuous mode
Dec  1 22:58:06 compute-0 ovn_controller[97770]: 2025-12-01T22:58:06Z|00112|binding|INFO|Claiming lport a139ed27-b785-495f-bc93-2f5daea46d42 for this chassis.
Dec  1 22:58:06 compute-0 ovn_controller[97770]: 2025-12-01T22:58:06Z|00113|binding|INFO|a139ed27-b785-495f-bc93-2f5daea46d42: Claiming fa:16:3e:b8:3e:a0 10.100.0.6
Dec  1 22:58:06 compute-0 nova_compute[189508]: 2025-12-01 22:58:06.715 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:58:06 compute-0 NetworkManager[56278]: <info>  [1764629886.7220] manager: (tapa139ed27-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.732 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:3e:a0 10.100.0.6'], port_security=['fa:16:3e:b8:3e:a0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4d450663-4303-4535-bc1a-72996000c25a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c3d0516-109b-46fb-ab67-19206f614258', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faa4919c58ee4a458bdb25fd4271bfde', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd06e5c87-dfe8-4629-aafa-87299e309e29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ebd388b8-c29a-49dc-9a3f-96f8cde4cd01, chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=a139ed27-b785-495f-bc93-2f5daea46d42) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.733 106662 INFO neutron.agent.ovn.metadata.agent [-] Port a139ed27-b785-495f-bc93-2f5daea46d42 in datapath 7c3d0516-109b-46fb-ab67-19206f614258 bound to our chassis
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.735 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c3d0516-109b-46fb-ab67-19206f614258
Dec  1 22:58:06 compute-0 nova_compute[189508]: 2025-12-01 22:58:06.738 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:58:06 compute-0 ovn_controller[97770]: 2025-12-01T22:58:06Z|00114|binding|INFO|Setting lport a139ed27-b785-495f-bc93-2f5daea46d42 ovn-installed in OVS
Dec  1 22:58:06 compute-0 ovn_controller[97770]: 2025-12-01T22:58:06Z|00115|binding|INFO|Setting lport a139ed27-b785-495f-bc93-2f5daea46d42 up in Southbound
Dec  1 22:58:06 compute-0 nova_compute[189508]: 2025-12-01 22:58:06.744 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:58:06 compute-0 nova_compute[189508]: 2025-12-01 22:58:06.747 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:58:06 compute-0 nova_compute[189508]: 2025-12-01 22:58:06.758 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.758 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[a980c8b9-3c3f-4416-b575-145233a617e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.759 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c3d0516-11 in ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.760 239973 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c3d0516-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.761 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[15e89c5e-0420-471d-8b64-148e141c1ea9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.761 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1f9687-8acd-4cad-819d-296db35e265c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.776 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[097e2a69-982c-42d3-8b97-8ada9ed2336b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:58:06 compute-0 systemd-machined[155759]: New machine qemu-11-instance-0000000b.
Dec  1 22:58:06 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.809 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[f6985610-fc30-4928-aecb-90e892e6e6a0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:58:06 compute-0 systemd-udevd[252858]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:58:06 compute-0 NetworkManager[56278]: <info>  [1764629886.8313] device (tapa139ed27-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 22:58:06 compute-0 NetworkManager[56278]: <info>  [1764629886.8325] device (tapa139ed27-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.845 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[6085f062-e975-42e2-855b-e4cc14149d0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:58:06 compute-0 systemd-udevd[252862]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:58:06 compute-0 NetworkManager[56278]: <info>  [1764629886.8536] manager: (tap7c3d0516-10): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.852 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[513fab90-2b83-4b80-9a93-d30193529092]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.886 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[71bc54a9-b7ef-4ae9-b88a-e71e34e23aac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.889 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[049d82b3-0977-49da-847a-103b17a25116]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:58:06 compute-0 NetworkManager[56278]: <info>  [1764629886.9181] device (tap7c3d0516-10): carrier: link connected
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.927 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[85cee0a1-efce-4711-bf21-72623d9218f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.956 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd8c562-8eae-4ac2-ad5a-4f6c99714219]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c3d0516-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:2b:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539648, 'reachable_time': 15045, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252888, 'error': None, 'target': 'ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.975 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7cd001-fd48-4b54-b5c8-2f1ce322e633]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:2bc5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539648, 'tstamp': 539648}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252889, 'error': None, 'target': 'ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:58:06 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:06.991 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[e13a4357-232b-4643-93de-6bbbe7df74be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c3d0516-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:2b:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539648, 'reachable_time': 15045, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252890, 'error': None, 'target': 'ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:07.025 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ad9bae-d9c8-4e6c-9403-99a663ad3bc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.069 189512 DEBUG nova.compute.manager [req-4d15c969-e370-4795-a246-7c29834bb6a5 req-7e2e2cda-2331-4582-9c64-b7c00cf28640 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received event network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.070 189512 DEBUG oslo_concurrency.lockutils [req-4d15c969-e370-4795-a246-7c29834bb6a5 req-7e2e2cda-2331-4582-9c64-b7c00cf28640 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "4d450663-4303-4535-bc1a-72996000c25a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.070 189512 DEBUG oslo_concurrency.lockutils [req-4d15c969-e370-4795-a246-7c29834bb6a5 req-7e2e2cda-2331-4582-9c64-b7c00cf28640 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.070 189512 DEBUG oslo_concurrency.lockutils [req-4d15c969-e370-4795-a246-7c29834bb6a5 req-7e2e2cda-2331-4582-9c64-b7c00cf28640 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.071 189512 DEBUG nova.compute.manager [req-4d15c969-e370-4795-a246-7c29834bb6a5 req-7e2e2cda-2331-4582-9c64-b7c00cf28640 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Processing event network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:07.094 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a24d36-4747-4f86-85b6-9ac6ab35b3d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:07.095 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c3d0516-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:07.095 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:07.096 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c3d0516-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 22:58:07 compute-0 kernel: tap7c3d0516-10: entered promiscuous mode
Dec  1 22:58:07 compute-0 NetworkManager[56278]: <info>  [1764629887.0985] manager: (tap7c3d0516-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.100 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:07.106 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c3d0516-10, col_values=(('external_ids', {'iface-id': '59cd1803-8a52-4381-bb39-d2aa1220acc5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 22:58:07 compute-0 ovn_controller[97770]: 2025-12-01T22:58:07Z|00116|binding|INFO|Releasing lport 59cd1803-8a52-4381-bb39-d2aa1220acc5 from this chassis (sb_readonly=0)
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.108 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:07.121 106662 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c3d0516-109b-46fb-ab67-19206f614258.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c3d0516-109b-46fb-ab67-19206f614258.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:07.122 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[074ad877-09b0-4630-9495-816f7891f2e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:07.123 106662 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: global
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    log         /dev/log local0 debug
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    log-tag     haproxy-metadata-proxy-7c3d0516-109b-46fb-ab67-19206f614258
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    user        root
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    group       root
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    maxconn     1024
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    pidfile     /var/lib/neutron/external/pids/7c3d0516-109b-46fb-ab67-19206f614258.pid.haproxy
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    daemon
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: defaults
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    log global
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    mode http
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    option httplog
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    option dontlognull
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    option http-server-close
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    option forwardfor
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    retries                 3
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    timeout http-request    30s
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    timeout connect         30s
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    timeout client          32s
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    timeout server          32s
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    timeout http-keep-alive 30s
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: listen listener
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    bind 169.254.169.254:80
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]:    http-request add-header X-OVN-Network-ID 7c3d0516-109b-46fb-ab67-19206f614258
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 22:58:07 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:07.123 106662 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258', 'env', 'PROCESS_TAG=haproxy-7c3d0516-109b-46fb-ab67-19206f614258', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c3d0516-109b-46fb-ab67-19206f614258.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.124 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.462 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629887.462434, 4d450663-4303-4535-bc1a-72996000c25a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.464 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] VM Started (Lifecycle Event)#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.466 189512 DEBUG nova.compute.manager [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.470 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.480 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.481 189512 INFO nova.virt.libvirt.driver [-] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Instance spawned successfully.#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.482 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.486 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.515 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.515 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629887.4634492, 4d450663-4303-4535-bc1a-72996000c25a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.516 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] VM Paused (Lifecycle Event)#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.528 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.528 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.529 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.529 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.530 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.531 189512 DEBUG nova.virt.libvirt.driver [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.537 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.542 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629887.4698255, 4d450663-4303-4535-bc1a-72996000c25a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.542 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] VM Resumed (Lifecycle Event)#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.564 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.569 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.615 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:58:07 compute-0 podman[252928]: 2025-12-01 22:58:07.61764975 +0000 UTC m=+0.077543850 container create 356b8c99c7bbd4597ffae3f9d160debc887c24a6ae5cd52288470fc8bcfcd126 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.635 189512 INFO nova.compute.manager [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Took 9.79 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.636 189512 DEBUG nova.compute.manager [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:58:07 compute-0 systemd[1]: Started libpod-conmon-356b8c99c7bbd4597ffae3f9d160debc887c24a6ae5cd52288470fc8bcfcd126.scope.
Dec  1 22:58:07 compute-0 podman[252928]: 2025-12-01 22:58:07.586447096 +0000 UTC m=+0.046341246 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 22:58:07 compute-0 systemd[1]: Started libcrun container.
Dec  1 22:58:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b1f32ae5fe73becfb1a61c774be9d4163a4bea30877e50defdd0f3200b176b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 22:58:07 compute-0 podman[252928]: 2025-12-01 22:58:07.742736777 +0000 UTC m=+0.202630917 container init 356b8c99c7bbd4597ffae3f9d160debc887c24a6ae5cd52288470fc8bcfcd126 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.745 189512 INFO nova.compute.manager [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Took 10.93 seconds to build instance.#033[00m
Dec  1 22:58:07 compute-0 podman[252928]: 2025-12-01 22:58:07.757863896 +0000 UTC m=+0.217758006 container start 356b8c99c7bbd4597ffae3f9d160debc887c24a6ae5cd52288470fc8bcfcd126 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  1 22:58:07 compute-0 nova_compute[189508]: 2025-12-01 22:58:07.759 189512 DEBUG oslo_concurrency.lockutils [None req-67e63aa3-6068-4bdd-826c-7a2ee36b1011 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:07 compute-0 neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258[252943]: [NOTICE]   (252947) : New worker (252949) forked
Dec  1 22:58:07 compute-0 neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258[252943]: [NOTICE]   (252947) : Loading success.
Dec  1 22:58:08 compute-0 nova_compute[189508]: 2025-12-01 22:58:08.105 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:08 compute-0 nova_compute[189508]: 2025-12-01 22:58:08.165 189512 DEBUG nova.network.neutron [req-8955f775-d7c0-4a78-b187-bc0bf71e14ff req-a0b6ea9e-6f8e-4727-9f1b-4d0fddda41d5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Updated VIF entry in instance network info cache for port a139ed27-b785-495f-bc93-2f5daea46d42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:58:08 compute-0 nova_compute[189508]: 2025-12-01 22:58:08.166 189512 DEBUG nova.network.neutron [req-8955f775-d7c0-4a78-b187-bc0bf71e14ff req-a0b6ea9e-6f8e-4727-9f1b-4d0fddda41d5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Updating instance_info_cache with network_info: [{"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:58:08 compute-0 nova_compute[189508]: 2025-12-01 22:58:08.182 189512 DEBUG oslo_concurrency.lockutils [req-8955f775-d7c0-4a78-b187-bc0bf71e14ff req-a0b6ea9e-6f8e-4727-9f1b-4d0fddda41d5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-4d450663-4303-4535-bc1a-72996000c25a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:58:08 compute-0 nova_compute[189508]: 2025-12-01 22:58:08.228 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:58:08 compute-0 nova_compute[189508]: 2025-12-01 22:58:08.229 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 22:58:09 compute-0 nova_compute[189508]: 2025-12-01 22:58:09.187 189512 DEBUG nova.compute.manager [req-aac02ae7-36f5-473c-b2d8-967d8d0f0109 req-77718c6f-9878-48c7-b014-66d95baeb40d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received event network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:58:09 compute-0 nova_compute[189508]: 2025-12-01 22:58:09.189 189512 DEBUG oslo_concurrency.lockutils [req-aac02ae7-36f5-473c-b2d8-967d8d0f0109 req-77718c6f-9878-48c7-b014-66d95baeb40d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "4d450663-4303-4535-bc1a-72996000c25a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:09 compute-0 nova_compute[189508]: 2025-12-01 22:58:09.189 189512 DEBUG oslo_concurrency.lockutils [req-aac02ae7-36f5-473c-b2d8-967d8d0f0109 req-77718c6f-9878-48c7-b014-66d95baeb40d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:09 compute-0 nova_compute[189508]: 2025-12-01 22:58:09.190 189512 DEBUG oslo_concurrency.lockutils [req-aac02ae7-36f5-473c-b2d8-967d8d0f0109 req-77718c6f-9878-48c7-b014-66d95baeb40d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:09 compute-0 nova_compute[189508]: 2025-12-01 22:58:09.190 189512 DEBUG nova.compute.manager [req-aac02ae7-36f5-473c-b2d8-967d8d0f0109 req-77718c6f-9878-48c7-b014-66d95baeb40d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] No waiting events found dispatching network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:58:09 compute-0 nova_compute[189508]: 2025-12-01 22:58:09.190 189512 WARNING nova.compute.manager [req-aac02ae7-36f5-473c-b2d8-967d8d0f0109 req-77718c6f-9878-48c7-b014-66d95baeb40d c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received unexpected event network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 for instance with vm_state active and task_state None.#033[00m
Dec  1 22:58:09 compute-0 nova_compute[189508]: 2025-12-01 22:58:09.227 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:58:09 compute-0 nova_compute[189508]: 2025-12-01 22:58:09.228 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 22:58:09 compute-0 nova_compute[189508]: 2025-12-01 22:58:09.388 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 22:58:09 compute-0 podman[252959]: 2025-12-01 22:58:09.855040189 +0000 UTC m=+0.127823926 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec  1 22:58:09 compute-0 podman[252958]: 2025-12-01 22:58:09.873735419 +0000 UTC m=+0.158614759 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Dec  1 22:58:10 compute-0 ovn_controller[97770]: 2025-12-01T22:58:10Z|00117|memory|INFO|peak resident set size grew 52% in last 2682.0 seconds, from 16000 kB to 24372 kB
Dec  1 22:58:10 compute-0 ovn_controller[97770]: 2025-12-01T22:58:10Z|00118|memory|INFO|idl-cells-OVN_Southbound:11068 idl-cells-Open_vSwitch:813 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:400 lflow-cache-entries-cache-matches:303 lflow-cache-size-KB:1666 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:699 ofctrl_installed_flow_usage-KB:509 ofctrl_sb_flow_ref_usage-KB:264
Dec  1 22:58:10 compute-0 ovn_controller[97770]: 2025-12-01T22:58:10Z|00119|binding|INFO|Releasing lport 59cd1803-8a52-4381-bb39-d2aa1220acc5 from this chassis (sb_readonly=0)
Dec  1 22:58:10 compute-0 ovn_controller[97770]: 2025-12-01T22:58:10Z|00120|binding|INFO|Releasing lport c21d900e-9830-49c7-a1df-ef9de7493e3f from this chassis (sb_readonly=0)
Dec  1 22:58:10 compute-0 nova_compute[189508]: 2025-12-01 22:58:10.470 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:10 compute-0 nova_compute[189508]: 2025-12-01 22:58:10.550 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:10 compute-0 nova_compute[189508]: 2025-12-01 22:58:10.901 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:11 compute-0 nova_compute[189508]: 2025-12-01 22:58:11.301 189512 DEBUG nova.compute.manager [req-71b2a059-18f7-4792-b375-dec87eaf02e4 req-c2c97031-27f0-4c51-8f57-816703c45cb6 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received event network-changed-a139ed27-b785-495f-bc93-2f5daea46d42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:58:11 compute-0 nova_compute[189508]: 2025-12-01 22:58:11.302 189512 DEBUG nova.compute.manager [req-71b2a059-18f7-4792-b375-dec87eaf02e4 req-c2c97031-27f0-4c51-8f57-816703c45cb6 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Refreshing instance network info cache due to event network-changed-a139ed27-b785-495f-bc93-2f5daea46d42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:58:11 compute-0 nova_compute[189508]: 2025-12-01 22:58:11.302 189512 DEBUG oslo_concurrency.lockutils [req-71b2a059-18f7-4792-b375-dec87eaf02e4 req-c2c97031-27f0-4c51-8f57-816703c45cb6 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-4d450663-4303-4535-bc1a-72996000c25a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:58:11 compute-0 nova_compute[189508]: 2025-12-01 22:58:11.302 189512 DEBUG oslo_concurrency.lockutils [req-71b2a059-18f7-4792-b375-dec87eaf02e4 req-c2c97031-27f0-4c51-8f57-816703c45cb6 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-4d450663-4303-4535-bc1a-72996000c25a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:58:11 compute-0 nova_compute[189508]: 2025-12-01 22:58:11.302 189512 DEBUG nova.network.neutron [req-71b2a059-18f7-4792-b375-dec87eaf02e4 req-c2c97031-27f0-4c51-8f57-816703c45cb6 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Refreshing network info cache for port a139ed27-b785-495f-bc93-2f5daea46d42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.087 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Acquiring lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.087 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.113 189512 DEBUG nova.compute.manager [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.239 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.241 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.268 189512 DEBUG nova.virt.hardware [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.270 189512 INFO nova.compute.claims [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.426 189512 DEBUG nova.compute.provider_tree [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.451 189512 DEBUG nova.scheduler.client.report [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.480 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.481 189512 DEBUG nova.compute.manager [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.547 189512 DEBUG nova.compute.manager [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.547 189512 DEBUG nova.network.neutron [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.564 189512 INFO nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.599 189512 DEBUG nova.compute.manager [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.709 189512 DEBUG nova.compute.manager [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.710 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.710 189512 INFO nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Creating image(s)#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.711 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Acquiring lock "/var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.711 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "/var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.712 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "/var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.725 189512 DEBUG oslo_concurrency.processutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.785 189512 DEBUG oslo_concurrency.processutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.786 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Acquiring lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.787 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.798 189512 DEBUG oslo_concurrency.processutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.852 189512 DEBUG oslo_concurrency.processutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.853 189512 DEBUG oslo_concurrency.processutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270,backing_fmt=raw /var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.899 189512 DEBUG oslo_concurrency.processutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270,backing_fmt=raw /var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.901 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.901 189512 DEBUG oslo_concurrency.processutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.959 189512 DEBUG oslo_concurrency.processutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.961 189512 DEBUG nova.virt.disk.api [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Checking if we can resize image /var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 22:58:12 compute-0 nova_compute[189508]: 2025-12-01 22:58:12.961 189512 DEBUG oslo_concurrency.processutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:58:13 compute-0 nova_compute[189508]: 2025-12-01 22:58:13.020 189512 DEBUG oslo_concurrency.processutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:58:13 compute-0 nova_compute[189508]: 2025-12-01 22:58:13.022 189512 DEBUG nova.virt.disk.api [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Cannot resize image /var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 22:58:13 compute-0 nova_compute[189508]: 2025-12-01 22:58:13.022 189512 DEBUG nova.objects.instance [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lazy-loading 'migration_context' on Instance uuid d35b993a-ba2a-478d-b7f6-c7dfba36d402 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:58:13 compute-0 nova_compute[189508]: 2025-12-01 22:58:13.055 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 22:58:13 compute-0 nova_compute[189508]: 2025-12-01 22:58:13.056 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Ensure instance console log exists: /var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 22:58:13 compute-0 nova_compute[189508]: 2025-12-01 22:58:13.057 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:13 compute-0 nova_compute[189508]: 2025-12-01 22:58:13.057 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:13 compute-0 nova_compute[189508]: 2025-12-01 22:58:13.058 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:13 compute-0 nova_compute[189508]: 2025-12-01 22:58:13.529 189512 DEBUG nova.policy [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '376b22ff1d4b4216a3013dc170064403', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5d415954cbc84272b9bc26d3d8a3a591', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 22:58:14 compute-0 nova_compute[189508]: 2025-12-01 22:58:14.562 189512 DEBUG nova.network.neutron [req-71b2a059-18f7-4792-b375-dec87eaf02e4 req-c2c97031-27f0-4c51-8f57-816703c45cb6 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Updated VIF entry in instance network info cache for port a139ed27-b785-495f-bc93-2f5daea46d42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:58:14 compute-0 nova_compute[189508]: 2025-12-01 22:58:14.563 189512 DEBUG nova.network.neutron [req-71b2a059-18f7-4792-b375-dec87eaf02e4 req-c2c97031-27f0-4c51-8f57-816703c45cb6 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Updating instance_info_cache with network_info: [{"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:58:14 compute-0 nova_compute[189508]: 2025-12-01 22:58:14.583 189512 DEBUG oslo_concurrency.lockutils [req-71b2a059-18f7-4792-b375-dec87eaf02e4 req-c2c97031-27f0-4c51-8f57-816703c45cb6 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-4d450663-4303-4535-bc1a-72996000c25a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:58:14 compute-0 podman[253013]: 2025-12-01 22:58:14.786676695 +0000 UTC m=+0.113335874 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:58:14 compute-0 podman[253015]: 2025-12-01 22:58:14.792452939 +0000 UTC m=+0.102572319 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, architecture=x86_64, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350, config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-type=git)
Dec  1 22:58:14 compute-0 podman[253016]: 2025-12-01 22:58:14.79531348 +0000 UTC m=+0.112724357 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, version=9.4, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  1 22:58:14 compute-0 podman[253014]: 2025-12-01 22:58:14.817108378 +0000 UTC m=+0.137910051 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:58:14 compute-0 nova_compute[189508]: 2025-12-01 22:58:14.830 189512 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764629879.828621, 691446f5-d3d8-4a4f-a161-f2058a04a59d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:58:14 compute-0 nova_compute[189508]: 2025-12-01 22:58:14.830 189512 INFO nova.compute.manager [-] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] VM Stopped (Lifecycle Event)#033[00m
Dec  1 22:58:14 compute-0 nova_compute[189508]: 2025-12-01 22:58:14.854 189512 DEBUG nova.compute.manager [None req-91cd224c-1830-4e97-90f4-cdc4389f7031 - - - - - -] [instance: 691446f5-d3d8-4a4f-a161-f2058a04a59d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:58:15 compute-0 nova_compute[189508]: 2025-12-01 22:58:15.557 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:15 compute-0 nova_compute[189508]: 2025-12-01 22:58:15.904 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:15 compute-0 nova_compute[189508]: 2025-12-01 22:58:15.972 189512 DEBUG nova.network.neutron [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Successfully created port: fdb7b491-6ff3-42d8-ba52-cdb8d280c17b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 22:58:17 compute-0 ovn_controller[97770]: 2025-12-01T22:58:17Z|00121|binding|INFO|Releasing lport 59cd1803-8a52-4381-bb39-d2aa1220acc5 from this chassis (sb_readonly=0)
Dec  1 22:58:17 compute-0 ovn_controller[97770]: 2025-12-01T22:58:17Z|00122|binding|INFO|Releasing lport c21d900e-9830-49c7-a1df-ef9de7493e3f from this chassis (sb_readonly=0)
Dec  1 22:58:17 compute-0 nova_compute[189508]: 2025-12-01 22:58:17.559 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:17 compute-0 nova_compute[189508]: 2025-12-01 22:58:17.935 189512 DEBUG nova.network.neutron [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Successfully updated port: fdb7b491-6ff3-42d8-ba52-cdb8d280c17b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 22:58:17 compute-0 nova_compute[189508]: 2025-12-01 22:58:17.954 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Acquiring lock "refresh_cache-d35b993a-ba2a-478d-b7f6-c7dfba36d402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:58:17 compute-0 nova_compute[189508]: 2025-12-01 22:58:17.955 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Acquired lock "refresh_cache-d35b993a-ba2a-478d-b7f6-c7dfba36d402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:58:17 compute-0 nova_compute[189508]: 2025-12-01 22:58:17.955 189512 DEBUG nova.network.neutron [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 22:58:18 compute-0 nova_compute[189508]: 2025-12-01 22:58:18.108 189512 DEBUG nova.compute.manager [req-dac4212e-83af-432c-90cc-22dca4c41394 req-dcce12d8-87a1-471d-afae-ee93d7f1d946 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Received event network-changed-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:58:18 compute-0 nova_compute[189508]: 2025-12-01 22:58:18.109 189512 DEBUG nova.compute.manager [req-dac4212e-83af-432c-90cc-22dca4c41394 req-dcce12d8-87a1-471d-afae-ee93d7f1d946 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Refreshing instance network info cache due to event network-changed-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:58:18 compute-0 nova_compute[189508]: 2025-12-01 22:58:18.109 189512 DEBUG oslo_concurrency.lockutils [req-dac4212e-83af-432c-90cc-22dca4c41394 req-dcce12d8-87a1-471d-afae-ee93d7f1d946 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-d35b993a-ba2a-478d-b7f6-c7dfba36d402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:58:18 compute-0 nova_compute[189508]: 2025-12-01 22:58:18.175 189512 DEBUG nova.network.neutron [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 22:58:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:18.372 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:58:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:18.375 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 22:58:18 compute-0 nova_compute[189508]: 2025-12-01 22:58:18.378 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.319 189512 DEBUG nova.network.neutron [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Updating instance_info_cache with network_info: [{"id": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "address": "fa:16:3e:bc:78:9d", "network": {"id": "27ca9db6-6725-47fe-b0f9-957bed1ac95a", "bridge": "br-int", "label": "tempest-TestServerBasicOps-674189106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d415954cbc84272b9bc26d3d8a3a591", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb7b491-6f", "ovs_interfaceid": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.340 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Releasing lock "refresh_cache-d35b993a-ba2a-478d-b7f6-c7dfba36d402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.341 189512 DEBUG nova.compute.manager [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Instance network_info: |[{"id": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "address": "fa:16:3e:bc:78:9d", "network": {"id": "27ca9db6-6725-47fe-b0f9-957bed1ac95a", "bridge": "br-int", "label": "tempest-TestServerBasicOps-674189106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d415954cbc84272b9bc26d3d8a3a591", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb7b491-6f", "ovs_interfaceid": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.342 189512 DEBUG oslo_concurrency.lockutils [req-dac4212e-83af-432c-90cc-22dca4c41394 req-dcce12d8-87a1-471d-afae-ee93d7f1d946 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-d35b993a-ba2a-478d-b7f6-c7dfba36d402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.343 189512 DEBUG nova.network.neutron [req-dac4212e-83af-432c-90cc-22dca4c41394 req-dcce12d8-87a1-471d-afae-ee93d7f1d946 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Refreshing network info cache for port fdb7b491-6ff3-42d8-ba52-cdb8d280c17b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.346 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Start _get_guest_xml network_info=[{"id": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "address": "fa:16:3e:bc:78:9d", "network": {"id": "27ca9db6-6725-47fe-b0f9-957bed1ac95a", "bridge": "br-int", "label": "tempest-TestServerBasicOps-674189106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d415954cbc84272b9bc26d3d8a3a591", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb7b491-6f", "ovs_interfaceid": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T22:55:21Z,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T22:55:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '74bb08bf-1799-4930-aad4-d505f26ff5f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.355 189512 WARNING nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.365 189512 DEBUG nova.virt.libvirt.host [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.366 189512 DEBUG nova.virt.libvirt.host [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.371 189512 DEBUG nova.virt.libvirt.host [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.372 189512 DEBUG nova.virt.libvirt.host [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.373 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.373 189512 DEBUG nova.virt.hardware [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T22:55:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e42a55e-71e2-4041-8ca2-725d63f058bf',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T22:55:21Z,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T22:55:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.374 189512 DEBUG nova.virt.hardware [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.374 189512 DEBUG nova.virt.hardware [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.375 189512 DEBUG nova.virt.hardware [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.375 189512 DEBUG nova.virt.hardware [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.376 189512 DEBUG nova.virt.hardware [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.376 189512 DEBUG nova.virt.hardware [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.377 189512 DEBUG nova.virt.hardware [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.377 189512 DEBUG nova.virt.hardware [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.378 189512 DEBUG nova.virt.hardware [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.378 189512 DEBUG nova.virt.hardware [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.382 189512 DEBUG nova.virt.libvirt.vif [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:58:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-158689313',display_name='tempest-TestServerBasicOps-server-158689313',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-158689313',id=12,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEdLU2XWR0D9/TV5zDcfDyB8kEnTGGiGQva7AuOv6B+LBv56eiAYC8WmrwJdgsugY1wRFkht/o9yr8+gyoh/ocnB+FJdcaoz459gvb4M95yZUZ9pYKJl6veahcNY5ap2bg==',key_name='tempest-TestServerBasicOps-553115585',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d415954cbc84272b9bc26d3d8a3a591',ramdisk_id='',reservation_id='r-ho1w8rch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-708531377',owner_user_name='tempest-TestServerBasicOps-708531377-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:58:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='376b22ff1d4b4216a3013dc170064403',uuid=d35b993a-ba2a-478d-b7f6-c7dfba36d402,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "address": "fa:16:3e:bc:78:9d", "network": {"id": "27ca9db6-6725-47fe-b0f9-957bed1ac95a", "bridge": "br-int", "label": "tempest-TestServerBasicOps-674189106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d415954cbc84272b9bc26d3d8a3a591", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb7b491-6f", "ovs_interfaceid": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.383 189512 DEBUG nova.network.os_vif_util [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Converting VIF {"id": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "address": "fa:16:3e:bc:78:9d", "network": {"id": "27ca9db6-6725-47fe-b0f9-957bed1ac95a", "bridge": "br-int", "label": "tempest-TestServerBasicOps-674189106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d415954cbc84272b9bc26d3d8a3a591", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb7b491-6f", "ovs_interfaceid": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.384 189512 DEBUG nova.network.os_vif_util [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=fdb7b491-6ff3-42d8-ba52-cdb8d280c17b,network=Network(27ca9db6-6725-47fe-b0f9-957bed1ac95a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb7b491-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.385 189512 DEBUG nova.objects.instance [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lazy-loading 'pci_devices' on Instance uuid d35b993a-ba2a-478d-b7f6-c7dfba36d402 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.401 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] End _get_guest_xml xml=<domain type="kvm">
Dec  1 22:58:20 compute-0 nova_compute[189508]:  <uuid>d35b993a-ba2a-478d-b7f6-c7dfba36d402</uuid>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  <name>instance-0000000c</name>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  <memory>131072</memory>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  <vcpu>1</vcpu>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  <metadata>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <nova:name>tempest-TestServerBasicOps-server-158689313</nova:name>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <nova:creationTime>2025-12-01 22:58:20</nova:creationTime>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <nova:flavor name="m1.nano">
Dec  1 22:58:20 compute-0 nova_compute[189508]:        <nova:memory>128</nova:memory>
Dec  1 22:58:20 compute-0 nova_compute[189508]:        <nova:disk>1</nova:disk>
Dec  1 22:58:20 compute-0 nova_compute[189508]:        <nova:swap>0</nova:swap>
Dec  1 22:58:20 compute-0 nova_compute[189508]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 22:58:20 compute-0 nova_compute[189508]:        <nova:vcpus>1</nova:vcpus>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      </nova:flavor>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <nova:owner>
Dec  1 22:58:20 compute-0 nova_compute[189508]:        <nova:user uuid="376b22ff1d4b4216a3013dc170064403">tempest-TestServerBasicOps-708531377-project-member</nova:user>
Dec  1 22:58:20 compute-0 nova_compute[189508]:        <nova:project uuid="5d415954cbc84272b9bc26d3d8a3a591">tempest-TestServerBasicOps-708531377</nova:project>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      </nova:owner>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <nova:root type="image" uuid="74bb08bf-1799-4930-aad4-d505f26ff5f4"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <nova:ports>
Dec  1 22:58:20 compute-0 nova_compute[189508]:        <nova:port uuid="fdb7b491-6ff3-42d8-ba52-cdb8d280c17b">
Dec  1 22:58:20 compute-0 nova_compute[189508]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:        </nova:port>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      </nova:ports>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    </nova:instance>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  </metadata>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  <sysinfo type="smbios">
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <system>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <entry name="manufacturer">RDO</entry>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <entry name="product">OpenStack Compute</entry>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <entry name="serial">d35b993a-ba2a-478d-b7f6-c7dfba36d402</entry>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <entry name="uuid">d35b993a-ba2a-478d-b7f6-c7dfba36d402</entry>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <entry name="family">Virtual Machine</entry>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    </system>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  </sysinfo>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  <os>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <boot dev="hd"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <smbios mode="sysinfo"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  </os>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  <features>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <acpi/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <apic/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <vmcoreinfo/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  </features>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  <clock offset="utc">
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <timer name="hpet" present="no"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  </clock>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  <cpu mode="host-model" match="exact">
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  </cpu>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  <devices>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <target dev="vda" bus="virtio"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <disk type="file" device="cdrom">
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk.config"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <target dev="sda" bus="sata"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <interface type="ethernet">
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <mac address="fa:16:3e:bc:78:9d"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <mtu size="1442"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <target dev="tapfdb7b491-6f"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    </interface>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <serial type="pty">
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <log file="/var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/console.log" append="off"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    </serial>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <video>
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    </video>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <input type="tablet" bus="usb"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <rng model="virtio">
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <backend model="random">/dev/urandom</backend>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    </rng>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <controller type="usb" index="0"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    <memballoon model="virtio">
Dec  1 22:58:20 compute-0 nova_compute[189508]:      <stats period="10"/>
Dec  1 22:58:20 compute-0 nova_compute[189508]:    </memballoon>
Dec  1 22:58:20 compute-0 nova_compute[189508]:  </devices>
Dec  1 22:58:20 compute-0 nova_compute[189508]: </domain>
Dec  1 22:58:20 compute-0 nova_compute[189508]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.410 189512 DEBUG nova.compute.manager [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Preparing to wait for external event network-vif-plugged-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.411 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Acquiring lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.411 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.411 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.412 189512 DEBUG nova.virt.libvirt.vif [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:58:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-158689313',display_name='tempest-TestServerBasicOps-server-158689313',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-158689313',id=12,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEdLU2XWR0D9/TV5zDcfDyB8kEnTGGiGQva7AuOv6B+LBv56eiAYC8WmrwJdgsugY1wRFkht/o9yr8+gyoh/ocnB+FJdcaoz459gvb4M95yZUZ9pYKJl6veahcNY5ap2bg==',key_name='tempest-TestServerBasicOps-553115585',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d415954cbc84272b9bc26d3d8a3a591',ramdisk_id='',reservation_id='r-ho1w8rch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-708531377',owner_user_name='tempest-TestServerBasicOps-708531377-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:58:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='376b22ff1d4b4216a3013dc170064403',uuid=d35b993a-ba2a-478d-b7f6-c7dfba36d402,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "address": "fa:16:3e:bc:78:9d", "network": {"id": "27ca9db6-6725-47fe-b0f9-957bed1ac95a", "bridge": "br-int", "label": "tempest-TestServerBasicOps-674189106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d415954cbc84272b9bc26d3d8a3a591", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb7b491-6f", "ovs_interfaceid": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.412 189512 DEBUG nova.network.os_vif_util [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Converting VIF {"id": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "address": "fa:16:3e:bc:78:9d", "network": {"id": "27ca9db6-6725-47fe-b0f9-957bed1ac95a", "bridge": "br-int", "label": "tempest-TestServerBasicOps-674189106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d415954cbc84272b9bc26d3d8a3a591", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb7b491-6f", "ovs_interfaceid": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.412 189512 DEBUG nova.network.os_vif_util [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=fdb7b491-6ff3-42d8-ba52-cdb8d280c17b,network=Network(27ca9db6-6725-47fe-b0f9-957bed1ac95a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb7b491-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.412 189512 DEBUG os_vif [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=fdb7b491-6ff3-42d8-ba52-cdb8d280c17b,network=Network(27ca9db6-6725-47fe-b0f9-957bed1ac95a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb7b491-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.413 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.413 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.413 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.418 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.418 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdb7b491-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.419 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfdb7b491-6f, col_values=(('external_ids', {'iface-id': 'fdb7b491-6ff3-42d8-ba52-cdb8d280c17b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:78:9d', 'vm-uuid': 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:58:20 compute-0 NetworkManager[56278]: <info>  [1764629900.4231] manager: (tapfdb7b491-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.425 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.437 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.438 189512 INFO os_vif [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=fdb7b491-6ff3-42d8-ba52-cdb8d280c17b,network=Network(27ca9db6-6725-47fe-b0f9-957bed1ac95a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb7b491-6f')#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.524 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.524 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.525 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] No VIF found with MAC fa:16:3e:bc:78:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.525 189512 INFO nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Using config drive#033[00m
Dec  1 22:58:20 compute-0 nova_compute[189508]: 2025-12-01 22:58:20.558 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:21 compute-0 nova_compute[189508]: 2025-12-01 22:58:21.147 189512 INFO nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Creating config drive at /var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk.config#033[00m
Dec  1 22:58:21 compute-0 nova_compute[189508]: 2025-12-01 22:58:21.156 189512 DEBUG oslo_concurrency.processutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2tvi39dv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:58:21 compute-0 nova_compute[189508]: 2025-12-01 22:58:21.307 189512 DEBUG oslo_concurrency.processutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2tvi39dv" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:58:21 compute-0 kernel: tapfdb7b491-6f: entered promiscuous mode
Dec  1 22:58:21 compute-0 nova_compute[189508]: 2025-12-01 22:58:21.394 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:21 compute-0 ovn_controller[97770]: 2025-12-01T22:58:21Z|00123|binding|INFO|Claiming lport fdb7b491-6ff3-42d8-ba52-cdb8d280c17b for this chassis.
Dec  1 22:58:21 compute-0 ovn_controller[97770]: 2025-12-01T22:58:21Z|00124|binding|INFO|fdb7b491-6ff3-42d8-ba52-cdb8d280c17b: Claiming fa:16:3e:bc:78:9d 10.100.0.8
Dec  1 22:58:21 compute-0 NetworkManager[56278]: <info>  [1764629901.3978] manager: (tapfdb7b491-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.401 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:78:9d 10.100.0.8'], port_security=['fa:16:3e:bc:78:9d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd35b993a-ba2a-478d-b7f6-c7dfba36d402', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27ca9db6-6725-47fe-b0f9-957bed1ac95a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d415954cbc84272b9bc26d3d8a3a591', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f0f011a8-001b-403a-aba7-ce71ccfb1571 f3fb426f-e7e3-4d56-8f7b-ee20f8ed572d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5337bcc8-8621-410a-b025-ec1f57d87929, chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=fdb7b491-6ff3-42d8-ba52-cdb8d280c17b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.404 106662 INFO neutron.agent.ovn.metadata.agent [-] Port fdb7b491-6ff3-42d8-ba52-cdb8d280c17b in datapath 27ca9db6-6725-47fe-b0f9-957bed1ac95a bound to our chassis#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.406 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 27ca9db6-6725-47fe-b0f9-957bed1ac95a#033[00m
Dec  1 22:58:21 compute-0 ovn_controller[97770]: 2025-12-01T22:58:21Z|00125|binding|INFO|Setting lport fdb7b491-6ff3-42d8-ba52-cdb8d280c17b ovn-installed in OVS
Dec  1 22:58:21 compute-0 ovn_controller[97770]: 2025-12-01T22:58:21Z|00126|binding|INFO|Setting lport fdb7b491-6ff3-42d8-ba52-cdb8d280c17b up in Southbound
Dec  1 22:58:21 compute-0 nova_compute[189508]: 2025-12-01 22:58:21.425 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:21 compute-0 nova_compute[189508]: 2025-12-01 22:58:21.431 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.432 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[eb3a9a58-c88b-4e24-9139-2bc2f0d83756]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.433 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap27ca9db6-61 in ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.439 239973 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap27ca9db6-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.439 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[f6eb406f-cb67-4267-a2f5-69919b571db7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:21 compute-0 systemd-udevd[253112]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.441 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[54c28c08-a585-4040-8a3c-93a2f09831c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.455 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[41acac8b-2f53-49dc-8eae-5a4e3be8444d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:21 compute-0 NetworkManager[56278]: <info>  [1764629901.4599] device (tapfdb7b491-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 22:58:21 compute-0 NetworkManager[56278]: <info>  [1764629901.4644] device (tapfdb7b491-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 22:58:21 compute-0 systemd-machined[155759]: New machine qemu-12-instance-0000000c.
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.486 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[98d0e974-0468-45f9-93a6-816beee520d4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:21 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.531 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[458c121b-54e0-4790-aa21-73ba4d3d51a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.541 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[e8a14722-6dc2-4c69-a9c8-f0628de49641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:21 compute-0 NetworkManager[56278]: <info>  [1764629901.5432] manager: (tap27ca9db6-60): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Dec  1 22:58:21 compute-0 systemd-udevd[253116]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.585 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e495b3-cd60-4010-bf36-fa3c4b754069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.588 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[3fbf81f3-211d-42cc-ac38-78a1b56c17ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:21 compute-0 NetworkManager[56278]: <info>  [1764629901.6179] device (tap27ca9db6-60): carrier: link connected
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.626 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[ef3d530a-803e-446f-a400-ba0c78bec282]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.669 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[bc99d627-399d-4e72-9ba2-7120a5333d05]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27ca9db6-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:10:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541118, 'reachable_time': 37481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253148, 'error': None, 'target': 'ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.696 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf71620-38ff-446d-8184-c5a20f7931ba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:1013'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 541118, 'tstamp': 541118}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253149, 'error': None, 'target': 'ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.719 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[d688863c-3f34-47da-8560-41fe394f5e45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27ca9db6-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:10:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541118, 'reachable_time': 37481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253150, 'error': None, 'target': 'ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.765 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[f9861975-bd4c-4f39-ae85-ff8f789ee270]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.847 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[8129f16a-dc66-4775-ac4b-f7303c50ccb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.849 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27ca9db6-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.849 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.850 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27ca9db6-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:58:21 compute-0 kernel: tap27ca9db6-60: entered promiscuous mode
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.859 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap27ca9db6-60, col_values=(('external_ids', {'iface-id': 'd9e68375-1082-4c9a-a109-193f8ca4a785'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:58:21 compute-0 NetworkManager[56278]: <info>  [1764629901.8611] manager: (tap27ca9db6-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Dec  1 22:58:21 compute-0 nova_compute[189508]: 2025-12-01 22:58:21.860 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:21 compute-0 ovn_controller[97770]: 2025-12-01T22:58:21Z|00127|binding|INFO|Releasing lport d9e68375-1082-4c9a-a109-193f8ca4a785 from this chassis (sb_readonly=0)
Dec  1 22:58:21 compute-0 nova_compute[189508]: 2025-12-01 22:58:21.887 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.887 106662 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/27ca9db6-6725-47fe-b0f9-957bed1ac95a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/27ca9db6-6725-47fe-b0f9-957bed1ac95a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.889 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[c29178ff-ebce-4423-8b6d-f18309edd6dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.892 106662 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: global
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    log         /dev/log local0 debug
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    log-tag     haproxy-metadata-proxy-27ca9db6-6725-47fe-b0f9-957bed1ac95a
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    user        root
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    group       root
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    maxconn     1024
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    pidfile     /var/lib/neutron/external/pids/27ca9db6-6725-47fe-b0f9-957bed1ac95a.pid.haproxy
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    daemon
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: defaults
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    log global
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    mode http
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    option httplog
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    option dontlognull
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    option http-server-close
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    option forwardfor
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    retries                 3
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    timeout http-request    30s
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    timeout connect         30s
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    timeout client          32s
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    timeout server          32s
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    timeout http-keep-alive 30s
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: listen listener
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    bind 169.254.169.254:80
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]:    http-request add-header X-OVN-Network-ID 27ca9db6-6725-47fe-b0f9-957bed1ac95a
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 22:58:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:21.895 106662 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a', 'env', 'PROCESS_TAG=haproxy-27ca9db6-6725-47fe-b0f9-957bed1ac95a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/27ca9db6-6725-47fe-b0f9-957bed1ac95a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.369 189512 DEBUG nova.compute.manager [req-4c47752b-ec8d-44b1-8c86-ae45d3d8e9f3 req-c806bb9d-e4ce-4166-bda3-8302a00877c5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Received event network-vif-plugged-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.370 189512 DEBUG oslo_concurrency.lockutils [req-4c47752b-ec8d-44b1-8c86-ae45d3d8e9f3 req-c806bb9d-e4ce-4166-bda3-8302a00877c5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.371 189512 DEBUG oslo_concurrency.lockutils [req-4c47752b-ec8d-44b1-8c86-ae45d3d8e9f3 req-c806bb9d-e4ce-4166-bda3-8302a00877c5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.371 189512 DEBUG oslo_concurrency.lockutils [req-4c47752b-ec8d-44b1-8c86-ae45d3d8e9f3 req-c806bb9d-e4ce-4166-bda3-8302a00877c5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.371 189512 DEBUG nova.compute.manager [req-4c47752b-ec8d-44b1-8c86-ae45d3d8e9f3 req-c806bb9d-e4ce-4166-bda3-8302a00877c5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Processing event network-vif-plugged-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 22:58:22 compute-0 podman[253181]: 2025-12-01 22:58:22.442336621 +0000 UTC m=+0.109536987 container create 57a037d09b6f5b1992e26d5b61afee24927b781eb3023ee57bfcf75f1b5ee09c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:58:22 compute-0 podman[253181]: 2025-12-01 22:58:22.376677489 +0000 UTC m=+0.043877915 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 22:58:22 compute-0 systemd[1]: Started libpod-conmon-57a037d09b6f5b1992e26d5b61afee24927b781eb3023ee57bfcf75f1b5ee09c.scope.
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.535 189512 DEBUG nova.compute.manager [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.537 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629902.5361955, d35b993a-ba2a-478d-b7f6-c7dfba36d402 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.537 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] VM Started (Lifecycle Event)#033[00m
Dec  1 22:58:22 compute-0 systemd[1]: Started libcrun container.
Dec  1 22:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdc25ab0297a68887887c490478d614effa6af2a600c832f09635911f4a9a599/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.569 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.580 189512 INFO nova.virt.libvirt.driver [-] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Instance spawned successfully.#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.581 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 22:58:22 compute-0 podman[253181]: 2025-12-01 22:58:22.59077787 +0000 UTC m=+0.257978286 container init 57a037d09b6f5b1992e26d5b61afee24927b781eb3023ee57bfcf75f1b5ee09c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  1 22:58:22 compute-0 podman[253181]: 2025-12-01 22:58:22.599213149 +0000 UTC m=+0.266413505 container start 57a037d09b6f5b1992e26d5b61afee24927b781eb3023ee57bfcf75f1b5ee09c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.604 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.613 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.618 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.618 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.619 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.620 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:22 compute-0 neutron-haproxy-ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a[253203]: [NOTICE]   (253207) : New worker (253209) forked
Dec  1 22:58:22 compute-0 neutron-haproxy-ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a[253203]: [NOTICE]   (253207) : Loading success.
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.633 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.634 189512 DEBUG nova.virt.libvirt.driver [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.645 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.647 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629902.5364451, d35b993a-ba2a-478d-b7f6-c7dfba36d402 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.648 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] VM Paused (Lifecycle Event)#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.680 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.687 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629902.571259, d35b993a-ba2a-478d-b7f6-c7dfba36d402 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.699 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] VM Resumed (Lifecycle Event)#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.719 189512 INFO nova.compute.manager [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Took 10.01 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.719 189512 DEBUG nova.compute.manager [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.751 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.763 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.813 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.844 189512 INFO nova.compute.manager [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Took 10.66 seconds to build instance.#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.864 189512 DEBUG oslo_concurrency.lockutils [None req-85b886be-41ab-4e60-9378-f3549c566f5a 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.893 189512 DEBUG nova.network.neutron [req-dac4212e-83af-432c-90cc-22dca4c41394 req-dcce12d8-87a1-471d-afae-ee93d7f1d946 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Updated VIF entry in instance network info cache for port fdb7b491-6ff3-42d8-ba52-cdb8d280c17b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.894 189512 DEBUG nova.network.neutron [req-dac4212e-83af-432c-90cc-22dca4c41394 req-dcce12d8-87a1-471d-afae-ee93d7f1d946 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Updating instance_info_cache with network_info: [{"id": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "address": "fa:16:3e:bc:78:9d", "network": {"id": "27ca9db6-6725-47fe-b0f9-957bed1ac95a", "bridge": "br-int", "label": "tempest-TestServerBasicOps-674189106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d415954cbc84272b9bc26d3d8a3a591", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb7b491-6f", "ovs_interfaceid": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:58:22 compute-0 nova_compute[189508]: 2025-12-01 22:58:22.910 189512 DEBUG oslo_concurrency.lockutils [req-dac4212e-83af-432c-90cc-22dca4c41394 req-dcce12d8-87a1-471d-afae-ee93d7f1d946 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-d35b993a-ba2a-478d-b7f6-c7dfba36d402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:58:22 compute-0 ovn_controller[97770]: 2025-12-01T22:58:22Z|00128|binding|INFO|Releasing lport d9e68375-1082-4c9a-a109-193f8ca4a785 from this chassis (sb_readonly=0)
Dec  1 22:58:22 compute-0 ovn_controller[97770]: 2025-12-01T22:58:22Z|00129|binding|INFO|Releasing lport 59cd1803-8a52-4381-bb39-d2aa1220acc5 from this chassis (sb_readonly=0)
Dec  1 22:58:22 compute-0 ovn_controller[97770]: 2025-12-01T22:58:22Z|00130|binding|INFO|Releasing lport c21d900e-9830-49c7-a1df-ef9de7493e3f from this chassis (sb_readonly=0)
Dec  1 22:58:23 compute-0 nova_compute[189508]: 2025-12-01 22:58:23.041 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:24 compute-0 nova_compute[189508]: 2025-12-01 22:58:24.692 189512 DEBUG nova.compute.manager [req-25cbea24-fd6d-4b07-8228-8b4ffa849a83 req-c6869406-c558-46af-a3c3-dee1a769b379 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Received event network-vif-plugged-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:58:24 compute-0 nova_compute[189508]: 2025-12-01 22:58:24.692 189512 DEBUG oslo_concurrency.lockutils [req-25cbea24-fd6d-4b07-8228-8b4ffa849a83 req-c6869406-c558-46af-a3c3-dee1a769b379 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:24 compute-0 nova_compute[189508]: 2025-12-01 22:58:24.692 189512 DEBUG oslo_concurrency.lockutils [req-25cbea24-fd6d-4b07-8228-8b4ffa849a83 req-c6869406-c558-46af-a3c3-dee1a769b379 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:24 compute-0 nova_compute[189508]: 2025-12-01 22:58:24.692 189512 DEBUG oslo_concurrency.lockutils [req-25cbea24-fd6d-4b07-8228-8b4ffa849a83 req-c6869406-c558-46af-a3c3-dee1a769b379 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:24 compute-0 nova_compute[189508]: 2025-12-01 22:58:24.693 189512 DEBUG nova.compute.manager [req-25cbea24-fd6d-4b07-8228-8b4ffa849a83 req-c6869406-c558-46af-a3c3-dee1a769b379 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] No waiting events found dispatching network-vif-plugged-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:58:24 compute-0 nova_compute[189508]: 2025-12-01 22:58:24.693 189512 WARNING nova.compute.manager [req-25cbea24-fd6d-4b07-8228-8b4ffa849a83 req-c6869406-c558-46af-a3c3-dee1a769b379 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Received unexpected event network-vif-plugged-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b for instance with vm_state active and task_state None.#033[00m
Dec  1 22:58:25 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:25.378 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:58:25 compute-0 nova_compute[189508]: 2025-12-01 22:58:25.422 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:25 compute-0 nova_compute[189508]: 2025-12-01 22:58:25.560 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:27 compute-0 nova_compute[189508]: 2025-12-01 22:58:27.431 189512 DEBUG nova.compute.manager [req-b82d525e-607f-4268-bad4-6c3d50cbc7cd req-84bcc172-925f-4443-a1c4-f8ac525f979b c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Received event network-changed-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:58:27 compute-0 nova_compute[189508]: 2025-12-01 22:58:27.433 189512 DEBUG nova.compute.manager [req-b82d525e-607f-4268-bad4-6c3d50cbc7cd req-84bcc172-925f-4443-a1c4-f8ac525f979b c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Refreshing instance network info cache due to event network-changed-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:58:27 compute-0 nova_compute[189508]: 2025-12-01 22:58:27.434 189512 DEBUG oslo_concurrency.lockutils [req-b82d525e-607f-4268-bad4-6c3d50cbc7cd req-84bcc172-925f-4443-a1c4-f8ac525f979b c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-d35b993a-ba2a-478d-b7f6-c7dfba36d402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:58:27 compute-0 nova_compute[189508]: 2025-12-01 22:58:27.434 189512 DEBUG oslo_concurrency.lockutils [req-b82d525e-607f-4268-bad4-6c3d50cbc7cd req-84bcc172-925f-4443-a1c4-f8ac525f979b c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-d35b993a-ba2a-478d-b7f6-c7dfba36d402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:58:27 compute-0 nova_compute[189508]: 2025-12-01 22:58:27.434 189512 DEBUG nova.network.neutron [req-b82d525e-607f-4268-bad4-6c3d50cbc7cd req-84bcc172-925f-4443-a1c4-f8ac525f979b c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Refreshing network info cache for port fdb7b491-6ff3-42d8-ba52-cdb8d280c17b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:58:29 compute-0 ovn_controller[97770]: 2025-12-01T22:58:29Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:9d:a6 10.100.0.10
Dec  1 22:58:29 compute-0 ovn_controller[97770]: 2025-12-01T22:58:29Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:9d:a6 10.100.0.10
Dec  1 22:58:29 compute-0 podman[203693]: time="2025-12-01T22:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:58:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31989 "" "Go-http-client/1.1"
Dec  1 22:58:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5733 "" "Go-http-client/1.1"
Dec  1 22:58:30 compute-0 nova_compute[189508]: 2025-12-01 22:58:30.425 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:30 compute-0 nova_compute[189508]: 2025-12-01 22:58:30.564 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:30 compute-0 podman[253230]: 2025-12-01 22:58:30.859450697 +0000 UTC m=+0.122721921 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 22:58:31 compute-0 openstack_network_exporter[205887]: ERROR   22:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:58:31 compute-0 openstack_network_exporter[205887]: ERROR   22:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:58:31 compute-0 openstack_network_exporter[205887]: ERROR   22:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:58:31 compute-0 openstack_network_exporter[205887]: ERROR   22:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:58:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:58:31 compute-0 openstack_network_exporter[205887]: ERROR   22:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:58:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:58:32 compute-0 podman[253254]: 2025-12-01 22:58:32.932714094 +0000 UTC m=+0.220151633 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 22:58:33 compute-0 nova_compute[189508]: 2025-12-01 22:58:33.287 189512 DEBUG nova.network.neutron [req-b82d525e-607f-4268-bad4-6c3d50cbc7cd req-84bcc172-925f-4443-a1c4-f8ac525f979b c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Updated VIF entry in instance network info cache for port fdb7b491-6ff3-42d8-ba52-cdb8d280c17b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:58:33 compute-0 nova_compute[189508]: 2025-12-01 22:58:33.289 189512 DEBUG nova.network.neutron [req-b82d525e-607f-4268-bad4-6c3d50cbc7cd req-84bcc172-925f-4443-a1c4-f8ac525f979b c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Updating instance_info_cache with network_info: [{"id": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "address": "fa:16:3e:bc:78:9d", "network": {"id": "27ca9db6-6725-47fe-b0f9-957bed1ac95a", "bridge": "br-int", "label": "tempest-TestServerBasicOps-674189106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d415954cbc84272b9bc26d3d8a3a591", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb7b491-6f", "ovs_interfaceid": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:58:33 compute-0 nova_compute[189508]: 2025-12-01 22:58:33.325 189512 DEBUG oslo_concurrency.lockutils [req-b82d525e-607f-4268-bad4-6c3d50cbc7cd req-84bcc172-925f-4443-a1c4-f8ac525f979b c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-d35b993a-ba2a-478d-b7f6-c7dfba36d402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:58:34 compute-0 podman[253274]: 2025-12-01 22:58:34.821654805 +0000 UTC m=+0.100440509 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 22:58:35 compute-0 nova_compute[189508]: 2025-12-01 22:58:35.429 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:35 compute-0 nova_compute[189508]: 2025-12-01 22:58:35.566 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:35 compute-0 nova_compute[189508]: 2025-12-01 22:58:35.631 189512 INFO nova.compute.manager [None req-75d59541-72f0-422c-a4bc-dd2d3855b2b4 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Get console output#033[00m
Dec  1 22:58:35 compute-0 nova_compute[189508]: 2025-12-01 22:58:35.796 239842 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  1 22:58:39 compute-0 nova_compute[189508]: 2025-12-01 22:58:39.301 189512 DEBUG nova.compute.manager [req-dcd37154-2b70-401b-8901-56ab6fe0e9ba req-326e8bf2-39f8-422e-9597-54decb15bc8a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Received event network-changed-02f1eac6-306c-4fa9-82c7-6e9082828c65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:58:39 compute-0 nova_compute[189508]: 2025-12-01 22:58:39.303 189512 DEBUG nova.compute.manager [req-dcd37154-2b70-401b-8901-56ab6fe0e9ba req-326e8bf2-39f8-422e-9597-54decb15bc8a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Refreshing instance network info cache due to event network-changed-02f1eac6-306c-4fa9-82c7-6e9082828c65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:58:39 compute-0 nova_compute[189508]: 2025-12-01 22:58:39.303 189512 DEBUG oslo_concurrency.lockutils [req-dcd37154-2b70-401b-8901-56ab6fe0e9ba req-326e8bf2-39f8-422e-9597-54decb15bc8a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-6a2b0a2e-1144-4264-917f-086024e18bed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:58:39 compute-0 nova_compute[189508]: 2025-12-01 22:58:39.304 189512 DEBUG oslo_concurrency.lockutils [req-dcd37154-2b70-401b-8901-56ab6fe0e9ba req-326e8bf2-39f8-422e-9597-54decb15bc8a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-6a2b0a2e-1144-4264-917f-086024e18bed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:58:39 compute-0 nova_compute[189508]: 2025-12-01 22:58:39.305 189512 DEBUG nova.network.neutron [req-dcd37154-2b70-401b-8901-56ab6fe0e9ba req-326e8bf2-39f8-422e-9597-54decb15bc8a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Refreshing network info cache for port 02f1eac6-306c-4fa9-82c7-6e9082828c65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:58:40 compute-0 nova_compute[189508]: 2025-12-01 22:58:40.431 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:40 compute-0 nova_compute[189508]: 2025-12-01 22:58:40.568 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:40 compute-0 podman[253303]: 2025-12-01 22:58:40.781239058 +0000 UTC m=+0.062443521 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  1 22:58:40 compute-0 podman[253302]: 2025-12-01 22:58:40.812723121 +0000 UTC m=+0.094892281 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  1 22:58:41 compute-0 nova_compute[189508]: 2025-12-01 22:58:41.006 189512 DEBUG nova.network.neutron [req-dcd37154-2b70-401b-8901-56ab6fe0e9ba req-326e8bf2-39f8-422e-9597-54decb15bc8a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Updated VIF entry in instance network info cache for port 02f1eac6-306c-4fa9-82c7-6e9082828c65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:58:41 compute-0 nova_compute[189508]: 2025-12-01 22:58:41.006 189512 DEBUG nova.network.neutron [req-dcd37154-2b70-401b-8901-56ab6fe0e9ba req-326e8bf2-39f8-422e-9597-54decb15bc8a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Updating instance_info_cache with network_info: [{"id": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "address": "fa:16:3e:67:9d:a6", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f1eac6-30", "ovs_interfaceid": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:58:41 compute-0 nova_compute[189508]: 2025-12-01 22:58:41.024 189512 DEBUG oslo_concurrency.lockutils [req-dcd37154-2b70-401b-8901-56ab6fe0e9ba req-326e8bf2-39f8-422e-9597-54decb15bc8a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-6a2b0a2e-1144-4264-917f-086024e18bed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:58:42 compute-0 ovn_controller[97770]: 2025-12-01T22:58:42Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:3e:a0 10.100.0.6
Dec  1 22:58:42 compute-0 ovn_controller[97770]: 2025-12-01T22:58:42Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:3e:a0 10.100.0.6
Dec  1 22:58:45 compute-0 nova_compute[189508]: 2025-12-01 22:58:45.370 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:58:45 compute-0 nova_compute[189508]: 2025-12-01 22:58:45.434 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:45 compute-0 nova_compute[189508]: 2025-12-01 22:58:45.572 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:45 compute-0 nova_compute[189508]: 2025-12-01 22:58:45.828 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:45 compute-0 nova_compute[189508]: 2025-12-01 22:58:45.828 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:45 compute-0 podman[253348]: 2025-12-01 22:58:45.857216955 +0000 UTC m=+0.098132424 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, name=ubi9, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1214.1726694543, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-container, config_id=edpm, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, architecture=x86_64, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0)
Dec  1 22:58:45 compute-0 podman[253346]: 2025-12-01 22:58:45.875525844 +0000 UTC m=+0.132749305 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  1 22:58:45 compute-0 podman[253345]: 2025-12-01 22:58:45.883877741 +0000 UTC m=+0.147607087 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 22:58:45 compute-0 podman[253347]: 2025-12-01 22:58:45.902453168 +0000 UTC m=+0.151961530 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Dec  1 22:58:45 compute-0 nova_compute[189508]: 2025-12-01 22:58:45.939 189512 DEBUG nova.compute.manager [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.016 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.017 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.027 189512 DEBUG nova.virt.hardware [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.028 189512 INFO nova.compute.claims [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.173 189512 DEBUG nova.compute.provider_tree [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.191 189512 DEBUG nova.scheduler.client.report [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.222 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.223 189512 DEBUG nova.compute.manager [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.283 189512 DEBUG nova.compute.manager [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.284 189512 DEBUG nova.network.neutron [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.309 189512 INFO nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.324 189512 DEBUG nova.compute.manager [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.437 189512 DEBUG nova.compute.manager [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.439 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.440 189512 INFO nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Creating image(s)#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.441 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "/var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.441 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "/var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.442 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "/var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.455 189512 DEBUG oslo_concurrency.processutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.551 189512 DEBUG oslo_concurrency.processutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.552 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.553 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.565 189512 DEBUG oslo_concurrency.processutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.622 189512 DEBUG oslo_concurrency.processutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.623 189512 DEBUG oslo_concurrency.processutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270,backing_fmt=raw /var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.680 189512 DEBUG oslo_concurrency.processutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270,backing_fmt=raw /var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk 1073741824" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.681 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.682 189512 DEBUG oslo_concurrency.processutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.755 189512 DEBUG oslo_concurrency.processutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.756 189512 DEBUG nova.virt.disk.api [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Checking if we can resize image /var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.757 189512 DEBUG oslo_concurrency.processutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.778 189512 DEBUG nova.policy [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '786ce878f1d2401ab2375f67e5ebd78b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '43a7ae6a25114fd199de68dfe3d3217b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.817 189512 DEBUG oslo_concurrency.processutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.818 189512 DEBUG nova.virt.disk.api [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Cannot resize image /var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.818 189512 DEBUG nova.objects.instance [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lazy-loading 'migration_context' on Instance uuid a4f50c75-4c0a-4222-a614-20d83eba9a2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.837 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.838 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Ensure instance console log exists: /var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.839 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.840 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:46 compute-0 nova_compute[189508]: 2025-12-01 22:58:46.840 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:47 compute-0 nova_compute[189508]: 2025-12-01 22:58:47.833 189512 DEBUG nova.network.neutron [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Successfully created port: 92958b22-0bb7-41c6-9850-61c81cea56d8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 22:58:48 compute-0 nova_compute[189508]: 2025-12-01 22:58:48.193 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:58:49 compute-0 nova_compute[189508]: 2025-12-01 22:58:49.496 189512 DEBUG nova.network.neutron [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Successfully updated port: 92958b22-0bb7-41c6-9850-61c81cea56d8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 22:58:49 compute-0 nova_compute[189508]: 2025-12-01 22:58:49.543 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "refresh_cache-a4f50c75-4c0a-4222-a614-20d83eba9a2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:58:49 compute-0 nova_compute[189508]: 2025-12-01 22:58:49.544 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquired lock "refresh_cache-a4f50c75-4c0a-4222-a614-20d83eba9a2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:58:49 compute-0 nova_compute[189508]: 2025-12-01 22:58:49.544 189512 DEBUG nova.network.neutron [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 22:58:49 compute-0 nova_compute[189508]: 2025-12-01 22:58:49.701 189512 DEBUG nova.compute.manager [req-00b35c23-a418-49ce-82d2-b5ef81a88e0a req-cc57b87d-a8f9-4528-b3ef-5e2521043936 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Received event network-changed-92958b22-0bb7-41c6-9850-61c81cea56d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:58:49 compute-0 nova_compute[189508]: 2025-12-01 22:58:49.702 189512 DEBUG nova.compute.manager [req-00b35c23-a418-49ce-82d2-b5ef81a88e0a req-cc57b87d-a8f9-4528-b3ef-5e2521043936 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Refreshing instance network info cache due to event network-changed-92958b22-0bb7-41c6-9850-61c81cea56d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:58:49 compute-0 nova_compute[189508]: 2025-12-01 22:58:49.702 189512 DEBUG oslo_concurrency.lockutils [req-00b35c23-a418-49ce-82d2-b5ef81a88e0a req-cc57b87d-a8f9-4528-b3ef-5e2521043936 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-a4f50c75-4c0a-4222-a614-20d83eba9a2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:58:49 compute-0 nova_compute[189508]: 2025-12-01 22:58:49.773 189512 DEBUG nova.network.neutron [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.438 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.576 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.857 189512 DEBUG nova.network.neutron [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Updating instance_info_cache with network_info: [{"id": "92958b22-0bb7-41c6-9850-61c81cea56d8", "address": "fa:16:3e:5c:2b:96", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92958b22-0b", "ovs_interfaceid": "92958b22-0bb7-41c6-9850-61c81cea56d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.890 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Releasing lock "refresh_cache-a4f50c75-4c0a-4222-a614-20d83eba9a2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.891 189512 DEBUG nova.compute.manager [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Instance network_info: |[{"id": "92958b22-0bb7-41c6-9850-61c81cea56d8", "address": "fa:16:3e:5c:2b:96", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92958b22-0b", "ovs_interfaceid": "92958b22-0bb7-41c6-9850-61c81cea56d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.892 189512 DEBUG oslo_concurrency.lockutils [req-00b35c23-a418-49ce-82d2-b5ef81a88e0a req-cc57b87d-a8f9-4528-b3ef-5e2521043936 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-a4f50c75-4c0a-4222-a614-20d83eba9a2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.893 189512 DEBUG nova.network.neutron [req-00b35c23-a418-49ce-82d2-b5ef81a88e0a req-cc57b87d-a8f9-4528-b3ef-5e2521043936 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Refreshing network info cache for port 92958b22-0bb7-41c6-9850-61c81cea56d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.898 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Start _get_guest_xml network_info=[{"id": "92958b22-0bb7-41c6-9850-61c81cea56d8", "address": "fa:16:3e:5c:2b:96", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92958b22-0b", "ovs_interfaceid": "92958b22-0bb7-41c6-9850-61c81cea56d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T22:55:21Z,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T22:55:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '74bb08bf-1799-4930-aad4-d505f26ff5f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.911 189512 WARNING nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.939 189512 DEBUG nova.virt.libvirt.host [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.941 189512 DEBUG nova.virt.libvirt.host [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.947 189512 DEBUG nova.virt.libvirt.host [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.949 189512 DEBUG nova.virt.libvirt.host [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.949 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.950 189512 DEBUG nova.virt.hardware [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T22:55:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e42a55e-71e2-4041-8ca2-725d63f058bf',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T22:55:21Z,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='af2fbf0e1b5f40c19aed69d241db7727',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T22:55:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.951 189512 DEBUG nova.virt.hardware [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.952 189512 DEBUG nova.virt.hardware [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.953 189512 DEBUG nova.virt.hardware [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.954 189512 DEBUG nova.virt.hardware [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.954 189512 DEBUG nova.virt.hardware [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.955 189512 DEBUG nova.virt.hardware [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.956 189512 DEBUG nova.virt.hardware [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.956 189512 DEBUG nova.virt.hardware [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.957 189512 DEBUG nova.virt.hardware [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.958 189512 DEBUG nova.virt.hardware [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.961 189512 DEBUG nova.virt.libvirt.vif [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:58:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1443491023',display_name='tempest-TestNetworkBasicOps-server-1443491023',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1443491023',id=13,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO5/vETY/it++hsDSzhTJNzHqx2Ih5naRH2QDqJ/NpOo3aoxUADDOFLjhO4K6mh2gX88uJUq6wuasKMqVILKGhtLSRmx2p7LIM/ZzaRAEfijcPif/+1DksRYivz9VOHF8g==',key_name='tempest-TestNetworkBasicOps-940390349',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43a7ae6a25114fd199de68dfe3d3217b',ramdisk_id='',reservation_id='r-2n91xcbu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1418827846',owner_user_name='tempest-TestNetworkBasicOps-1418827846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:58:46Z,user_data=None,user_id='786ce878f1d2401ab2375f67e5ebd78b',uuid=a4f50c75-4c0a-4222-a614-20d83eba9a2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92958b22-0bb7-41c6-9850-61c81cea56d8", "address": "fa:16:3e:5c:2b:96", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92958b22-0b", "ovs_interfaceid": "92958b22-0bb7-41c6-9850-61c81cea56d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.961 189512 DEBUG nova.network.os_vif_util [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Converting VIF {"id": "92958b22-0bb7-41c6-9850-61c81cea56d8", "address": "fa:16:3e:5c:2b:96", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92958b22-0b", "ovs_interfaceid": "92958b22-0bb7-41c6-9850-61c81cea56d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.962 189512 DEBUG nova.network.os_vif_util [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:2b:96,bridge_name='br-int',has_traffic_filtering=True,id=92958b22-0bb7-41c6-9850-61c81cea56d8,network=Network(513808ab-c863-4790-88e3-b64040a0ed8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92958b22-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.962 189512 DEBUG nova.objects.instance [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lazy-loading 'pci_devices' on Instance uuid a4f50c75-4c0a-4222-a614-20d83eba9a2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.981 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] End _get_guest_xml xml=<domain type="kvm">
Dec  1 22:58:50 compute-0 nova_compute[189508]:  <uuid>a4f50c75-4c0a-4222-a614-20d83eba9a2f</uuid>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  <name>instance-0000000d</name>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  <memory>131072</memory>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  <vcpu>1</vcpu>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  <metadata>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <nova:name>tempest-TestNetworkBasicOps-server-1443491023</nova:name>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <nova:creationTime>2025-12-01 22:58:50</nova:creationTime>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <nova:flavor name="m1.nano">
Dec  1 22:58:50 compute-0 nova_compute[189508]:        <nova:memory>128</nova:memory>
Dec  1 22:58:50 compute-0 nova_compute[189508]:        <nova:disk>1</nova:disk>
Dec  1 22:58:50 compute-0 nova_compute[189508]:        <nova:swap>0</nova:swap>
Dec  1 22:58:50 compute-0 nova_compute[189508]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 22:58:50 compute-0 nova_compute[189508]:        <nova:vcpus>1</nova:vcpus>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      </nova:flavor>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <nova:owner>
Dec  1 22:58:50 compute-0 nova_compute[189508]:        <nova:user uuid="786ce878f1d2401ab2375f67e5ebd78b">tempest-TestNetworkBasicOps-1418827846-project-member</nova:user>
Dec  1 22:58:50 compute-0 nova_compute[189508]:        <nova:project uuid="43a7ae6a25114fd199de68dfe3d3217b">tempest-TestNetworkBasicOps-1418827846</nova:project>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      </nova:owner>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <nova:root type="image" uuid="74bb08bf-1799-4930-aad4-d505f26ff5f4"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <nova:ports>
Dec  1 22:58:50 compute-0 nova_compute[189508]:        <nova:port uuid="92958b22-0bb7-41c6-9850-61c81cea56d8">
Dec  1 22:58:50 compute-0 nova_compute[189508]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:        </nova:port>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      </nova:ports>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    </nova:instance>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  </metadata>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  <sysinfo type="smbios">
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <system>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <entry name="manufacturer">RDO</entry>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <entry name="product">OpenStack Compute</entry>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <entry name="serial">a4f50c75-4c0a-4222-a614-20d83eba9a2f</entry>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <entry name="uuid">a4f50c75-4c0a-4222-a614-20d83eba9a2f</entry>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <entry name="family">Virtual Machine</entry>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    </system>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  </sysinfo>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  <os>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <boot dev="hd"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <smbios mode="sysinfo"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  </os>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  <features>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <acpi/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <apic/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <vmcoreinfo/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  </features>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  <clock offset="utc">
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <timer name="hpet" present="no"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  </clock>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  <cpu mode="host-model" match="exact">
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  </cpu>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  <devices>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <target dev="vda" bus="virtio"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <disk type="file" device="cdrom">
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk.config"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <target dev="sda" bus="sata"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <interface type="ethernet">
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <mac address="fa:16:3e:5c:2b:96"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <mtu size="1442"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <target dev="tap92958b22-0b"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    </interface>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <serial type="pty">
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <log file="/var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/console.log" append="off"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    </serial>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <video>
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    </video>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <input type="tablet" bus="usb"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <rng model="virtio">
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <backend model="random">/dev/urandom</backend>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    </rng>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <controller type="usb" index="0"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    <memballoon model="virtio">
Dec  1 22:58:50 compute-0 nova_compute[189508]:      <stats period="10"/>
Dec  1 22:58:50 compute-0 nova_compute[189508]:    </memballoon>
Dec  1 22:58:50 compute-0 nova_compute[189508]:  </devices>
Dec  1 22:58:50 compute-0 nova_compute[189508]: </domain>
Dec  1 22:58:50 compute-0 nova_compute[189508]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.982 189512 DEBUG nova.compute.manager [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Preparing to wait for external event network-vif-plugged-92958b22-0bb7-41c6-9850-61c81cea56d8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.983 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.983 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:50 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.983 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.984 189512 DEBUG nova.virt.libvirt.vif [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T22:58:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1443491023',display_name='tempest-TestNetworkBasicOps-server-1443491023',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1443491023',id=13,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO5/vETY/it++hsDSzhTJNzHqx2Ih5naRH2QDqJ/NpOo3aoxUADDOFLjhO4K6mh2gX88uJUq6wuasKMqVILKGhtLSRmx2p7LIM/ZzaRAEfijcPif/+1DksRYivz9VOHF8g==',key_name='tempest-TestNetworkBasicOps-940390349',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43a7ae6a25114fd199de68dfe3d3217b',ramdisk_id='',reservation_id='r-2n91xcbu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1418827846',owner_user_name='tempest-TestNetworkBasicOps-1418827846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:58:46Z,user_data=None,user_id='786ce878f1d2401ab2375f67e5ebd78b',uuid=a4f50c75-4c0a-4222-a614-20d83eba9a2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92958b22-0bb7-41c6-9850-61c81cea56d8", "address": "fa:16:3e:5c:2b:96", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92958b22-0b", "ovs_interfaceid": "92958b22-0bb7-41c6-9850-61c81cea56d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.984 189512 DEBUG nova.network.os_vif_util [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Converting VIF {"id": "92958b22-0bb7-41c6-9850-61c81cea56d8", "address": "fa:16:3e:5c:2b:96", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92958b22-0b", "ovs_interfaceid": "92958b22-0bb7-41c6-9850-61c81cea56d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.985 189512 DEBUG nova.network.os_vif_util [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:2b:96,bridge_name='br-int',has_traffic_filtering=True,id=92958b22-0bb7-41c6-9850-61c81cea56d8,network=Network(513808ab-c863-4790-88e3-b64040a0ed8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92958b22-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.985 189512 DEBUG os_vif [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:2b:96,bridge_name='br-int',has_traffic_filtering=True,id=92958b22-0bb7-41c6-9850-61c81cea56d8,network=Network(513808ab-c863-4790-88e3-b64040a0ed8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92958b22-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.986 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.986 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.987 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.989 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.989 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92958b22-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.990 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap92958b22-0b, col_values=(('external_ids', {'iface-id': '92958b22-0bb7-41c6-9850-61c81cea56d8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5c:2b:96', 'vm-uuid': 'a4f50c75-4c0a-4222-a614-20d83eba9a2f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.992 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:51 compute-0 NetworkManager[56278]: <info>  [1764629930.9945] manager: (tap92958b22-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:50.994 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:51.008 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:51.009 189512 INFO os_vif [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:2b:96,bridge_name='br-int',has_traffic_filtering=True,id=92958b22-0bb7-41c6-9850-61c81cea56d8,network=Network(513808ab-c863-4790-88e3-b64040a0ed8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92958b22-0b')#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:51.084 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:51.084 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:51.085 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] No VIF found with MAC fa:16:3e:5c:2b:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:51.085 189512 INFO nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Using config drive#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:51.777 189512 INFO nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Creating config drive at /var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk.config#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:51.789 189512 DEBUG oslo_concurrency.processutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy35i4wh2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:58:51 compute-0 nova_compute[189508]: 2025-12-01 22:58:51.939 189512 DEBUG oslo_concurrency.processutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy35i4wh2" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:58:52 compute-0 NetworkManager[56278]: <info>  [1764629932.0441] manager: (tap92958b22-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Dec  1 22:58:52 compute-0 kernel: tap92958b22-0b: entered promiscuous mode
Dec  1 22:58:52 compute-0 ovn_controller[97770]: 2025-12-01T22:58:52Z|00131|binding|INFO|Claiming lport 92958b22-0bb7-41c6-9850-61c81cea56d8 for this chassis.
Dec  1 22:58:52 compute-0 ovn_controller[97770]: 2025-12-01T22:58:52Z|00132|binding|INFO|92958b22-0bb7-41c6-9850-61c81cea56d8: Claiming fa:16:3e:5c:2b:96 10.100.0.7
Dec  1 22:58:52 compute-0 nova_compute[189508]: 2025-12-01 22:58:52.053 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:52 compute-0 ovn_controller[97770]: 2025-12-01T22:58:52Z|00133|binding|INFO|Setting lport 92958b22-0bb7-41c6-9850-61c81cea56d8 ovn-installed in OVS
Dec  1 22:58:52 compute-0 nova_compute[189508]: 2025-12-01 22:58:52.082 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:52 compute-0 nova_compute[189508]: 2025-12-01 22:58:52.088 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:52 compute-0 systemd-udevd[253457]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:58:52 compute-0 systemd-machined[155759]: New machine qemu-13-instance-0000000d.
Dec  1 22:58:52 compute-0 NetworkManager[56278]: <info>  [1764629932.1229] device (tap92958b22-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 22:58:52 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Dec  1 22:58:52 compute-0 NetworkManager[56278]: <info>  [1764629932.1253] device (tap92958b22-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 22:58:52 compute-0 ovn_controller[97770]: 2025-12-01T22:58:52Z|00134|binding|INFO|Setting lport 92958b22-0bb7-41c6-9850-61c81cea56d8 up in Southbound
Dec  1 22:58:52 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:52.147 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:2b:96 10.100.0.7'], port_security=['fa:16:3e:5c:2b:96 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a4f50c75-4c0a-4222-a614-20d83eba9a2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-513808ab-c863-4790-88e3-b64040a0ed8a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43a7ae6a25114fd199de68dfe3d3217b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4de5fd7e-e0c4-4a2c-a479-6e7aa60056a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e643dba6-de01-4938-9750-33d8ce8dfa77, chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=92958b22-0bb7-41c6-9850-61c81cea56d8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:58:52 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:52.148 106662 INFO neutron.agent.ovn.metadata.agent [-] Port 92958b22-0bb7-41c6-9850-61c81cea56d8 in datapath 513808ab-c863-4790-88e3-b64040a0ed8a bound to our chassis#033[00m
Dec  1 22:58:52 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:52.153 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 513808ab-c863-4790-88e3-b64040a0ed8a#033[00m
Dec  1 22:58:52 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:52.183 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[21954b0d-4473-4db9-b7e9-c01d534eece2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:52 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:52.211 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[d34c3531-0446-4efb-b84d-d76606e9f932]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:52 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:52.215 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[6f3974ac-7793-4f17-8538-c677941cebbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:52 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:52.246 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[466b9850-8f3b-4f8d-af30-61fd88710191]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:52 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:52.264 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[0a725c3b-949c-40fc-82e7-1f1e4c122fce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap513808ab-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0c:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537641, 'reachable_time': 30370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253471, 'error': None, 'target': 'ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:52 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:52.280 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea7e996-26da-4d80-8192-ba310fd74f46]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap513808ab-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537656, 'tstamp': 537656}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253472, 'error': None, 'target': 'ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap513808ab-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537660, 'tstamp': 537660}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253472, 'error': None, 'target': 'ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:58:52 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:52.283 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap513808ab-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:58:52 compute-0 nova_compute[189508]: 2025-12-01 22:58:52.284 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:52 compute-0 nova_compute[189508]: 2025-12-01 22:58:52.285 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:52 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:52.286 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap513808ab-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:58:52 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:52.287 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:58:52 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:52.287 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap513808ab-c0, col_values=(('external_ids', {'iface-id': 'c21d900e-9830-49c7-a1df-ef9de7493e3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:58:52 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:58:52.288 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:58:52 compute-0 nova_compute[189508]: 2025-12-01 22:58:52.700 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629932.699817, a4f50c75-4c0a-4222-a614-20d83eba9a2f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:58:52 compute-0 nova_compute[189508]: 2025-12-01 22:58:52.701 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] VM Started (Lifecycle Event)#033[00m
Dec  1 22:58:52 compute-0 nova_compute[189508]: 2025-12-01 22:58:52.732 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:58:52 compute-0 nova_compute[189508]: 2025-12-01 22:58:52.741 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629932.700644, a4f50c75-4c0a-4222-a614-20d83eba9a2f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:58:52 compute-0 nova_compute[189508]: 2025-12-01 22:58:52.742 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] VM Paused (Lifecycle Event)#033[00m
Dec  1 22:58:52 compute-0 nova_compute[189508]: 2025-12-01 22:58:52.769 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:58:52 compute-0 nova_compute[189508]: 2025-12-01 22:58:52.778 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:58:52 compute-0 nova_compute[189508]: 2025-12-01 22:58:52.803 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.141 189512 DEBUG nova.compute.manager [req-0fffae13-cf09-42e3-909b-046aed5a3972 req-2a79f06e-1976-488f-839a-aea6640ac974 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Received event network-vif-plugged-92958b22-0bb7-41c6-9850-61c81cea56d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.141 189512 DEBUG oslo_concurrency.lockutils [req-0fffae13-cf09-42e3-909b-046aed5a3972 req-2a79f06e-1976-488f-839a-aea6640ac974 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.142 189512 DEBUG oslo_concurrency.lockutils [req-0fffae13-cf09-42e3-909b-046aed5a3972 req-2a79f06e-1976-488f-839a-aea6640ac974 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.142 189512 DEBUG oslo_concurrency.lockutils [req-0fffae13-cf09-42e3-909b-046aed5a3972 req-2a79f06e-1976-488f-839a-aea6640ac974 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.142 189512 DEBUG nova.compute.manager [req-0fffae13-cf09-42e3-909b-046aed5a3972 req-2a79f06e-1976-488f-839a-aea6640ac974 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Processing event network-vif-plugged-92958b22-0bb7-41c6-9850-61c81cea56d8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.143 189512 DEBUG nova.compute.manager [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.155 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629933.1547742, a4f50c75-4c0a-4222-a614-20d83eba9a2f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.155 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] VM Resumed (Lifecycle Event)#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.156 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.161 189512 INFO nova.virt.libvirt.driver [-] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Instance spawned successfully.#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.161 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.177 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.185 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.188 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.188 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.188 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.189 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.189 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.189 189512 DEBUG nova.virt.libvirt.driver [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.215 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.254 189512 INFO nova.compute.manager [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Took 6.82 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.254 189512 DEBUG nova.compute.manager [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.342 189512 DEBUG nova.network.neutron [req-00b35c23-a418-49ce-82d2-b5ef81a88e0a req-cc57b87d-a8f9-4528-b3ef-5e2521043936 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Updated VIF entry in instance network info cache for port 92958b22-0bb7-41c6-9850-61c81cea56d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.343 189512 DEBUG nova.network.neutron [req-00b35c23-a418-49ce-82d2-b5ef81a88e0a req-cc57b87d-a8f9-4528-b3ef-5e2521043936 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Updating instance_info_cache with network_info: [{"id": "92958b22-0bb7-41c6-9850-61c81cea56d8", "address": "fa:16:3e:5c:2b:96", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92958b22-0b", "ovs_interfaceid": "92958b22-0bb7-41c6-9850-61c81cea56d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.364 189512 DEBUG oslo_concurrency.lockutils [req-00b35c23-a418-49ce-82d2-b5ef81a88e0a req-cc57b87d-a8f9-4528-b3ef-5e2521043936 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-a4f50c75-4c0a-4222-a614-20d83eba9a2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.381 189512 INFO nova.compute.manager [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Took 7.39 seconds to build instance.#033[00m
Dec  1 22:58:53 compute-0 nova_compute[189508]: 2025-12-01 22:58:53.401 189512 DEBUG oslo_concurrency.lockutils [None req-8c9f1183-f44c-4517-8e38-6aa86d30be33 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:55 compute-0 nova_compute[189508]: 2025-12-01 22:58:55.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:58:55 compute-0 nova_compute[189508]: 2025-12-01 22:58:55.450 189512 DEBUG nova.compute.manager [req-df8a3ab6-a03e-49fc-9895-dc8284684290 req-9b186729-4404-4bda-80de-af09484eace9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Received event network-vif-plugged-92958b22-0bb7-41c6-9850-61c81cea56d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:58:55 compute-0 nova_compute[189508]: 2025-12-01 22:58:55.452 189512 DEBUG oslo_concurrency.lockutils [req-df8a3ab6-a03e-49fc-9895-dc8284684290 req-9b186729-4404-4bda-80de-af09484eace9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:58:55 compute-0 nova_compute[189508]: 2025-12-01 22:58:55.452 189512 DEBUG oslo_concurrency.lockutils [req-df8a3ab6-a03e-49fc-9895-dc8284684290 req-9b186729-4404-4bda-80de-af09484eace9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:58:55 compute-0 nova_compute[189508]: 2025-12-01 22:58:55.453 189512 DEBUG oslo_concurrency.lockutils [req-df8a3ab6-a03e-49fc-9895-dc8284684290 req-9b186729-4404-4bda-80de-af09484eace9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:58:55 compute-0 nova_compute[189508]: 2025-12-01 22:58:55.453 189512 DEBUG nova.compute.manager [req-df8a3ab6-a03e-49fc-9895-dc8284684290 req-9b186729-4404-4bda-80de-af09484eace9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] No waiting events found dispatching network-vif-plugged-92958b22-0bb7-41c6-9850-61c81cea56d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:58:55 compute-0 nova_compute[189508]: 2025-12-01 22:58:55.453 189512 WARNING nova.compute.manager [req-df8a3ab6-a03e-49fc-9895-dc8284684290 req-9b186729-4404-4bda-80de-af09484eace9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Received unexpected event network-vif-plugged-92958b22-0bb7-41c6-9850-61c81cea56d8 for instance with vm_state active and task_state None.#033[00m
Dec  1 22:58:55 compute-0 nova_compute[189508]: 2025-12-01 22:58:55.578 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:55 compute-0 nova_compute[189508]: 2025-12-01 22:58:55.993 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:58:56 compute-0 nova_compute[189508]: 2025-12-01 22:58:56.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:58:56 compute-0 nova_compute[189508]: 2025-12-01 22:58:56.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 22:58:56 compute-0 nova_compute[189508]: 2025-12-01 22:58:56.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 22:58:56 compute-0 ovn_controller[97770]: 2025-12-01T22:58:56Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:78:9d 10.100.0.8
Dec  1 22:58:56 compute-0 ovn_controller[97770]: 2025-12-01T22:58:56Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:78:9d 10.100.0.8
Dec  1 22:58:56 compute-0 nova_compute[189508]: 2025-12-01 22:58:56.920 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-6a2b0a2e-1144-4264-917f-086024e18bed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:58:56 compute-0 nova_compute[189508]: 2025-12-01 22:58:56.921 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-6a2b0a2e-1144-4264-917f-086024e18bed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:58:56 compute-0 nova_compute[189508]: 2025-12-01 22:58:56.921 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 22:58:56 compute-0 nova_compute[189508]: 2025-12-01 22:58:56.921 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6a2b0a2e-1144-4264-917f-086024e18bed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:58:59 compute-0 nova_compute[189508]: 2025-12-01 22:58:59.636 189512 DEBUG nova.compute.manager [req-d2b6eb2d-c74a-4080-af70-4ea94c9e1674 req-38c73cb9-fc16-47f6-9bdc-f827bcf480ff c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Received event network-changed-92958b22-0bb7-41c6-9850-61c81cea56d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:58:59 compute-0 nova_compute[189508]: 2025-12-01 22:58:59.637 189512 DEBUG nova.compute.manager [req-d2b6eb2d-c74a-4080-af70-4ea94c9e1674 req-38c73cb9-fc16-47f6-9bdc-f827bcf480ff c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Refreshing instance network info cache due to event network-changed-92958b22-0bb7-41c6-9850-61c81cea56d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 22:58:59 compute-0 nova_compute[189508]: 2025-12-01 22:58:59.637 189512 DEBUG oslo_concurrency.lockutils [req-d2b6eb2d-c74a-4080-af70-4ea94c9e1674 req-38c73cb9-fc16-47f6-9bdc-f827bcf480ff c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-a4f50c75-4c0a-4222-a614-20d83eba9a2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:58:59 compute-0 nova_compute[189508]: 2025-12-01 22:58:59.638 189512 DEBUG oslo_concurrency.lockutils [req-d2b6eb2d-c74a-4080-af70-4ea94c9e1674 req-38c73cb9-fc16-47f6-9bdc-f827bcf480ff c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-a4f50c75-4c0a-4222-a614-20d83eba9a2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:58:59 compute-0 nova_compute[189508]: 2025-12-01 22:58:59.638 189512 DEBUG nova.network.neutron [req-d2b6eb2d-c74a-4080-af70-4ea94c9e1674 req-38c73cb9-fc16-47f6-9bdc-f827bcf480ff c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Refreshing network info cache for port 92958b22-0bb7-41c6-9850-61c81cea56d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 22:58:59 compute-0 podman[203693]: time="2025-12-01T22:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:58:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31989 "" "Go-http-client/1.1"
Dec  1 22:58:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5732 "" "Go-http-client/1.1"
Dec  1 22:58:59 compute-0 nova_compute[189508]: 2025-12-01 22:58:59.984 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Updating instance_info_cache with network_info: [{"id": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "address": "fa:16:3e:67:9d:a6", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f1eac6-30", "ovs_interfaceid": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:59:00 compute-0 nova_compute[189508]: 2025-12-01 22:59:00.008 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-6a2b0a2e-1144-4264-917f-086024e18bed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:59:00 compute-0 nova_compute[189508]: 2025-12-01 22:59:00.009 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 22:59:00 compute-0 nova_compute[189508]: 2025-12-01 22:59:00.010 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:59:00 compute-0 nova_compute[189508]: 2025-12-01 22:59:00.010 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:59:00 compute-0 nova_compute[189508]: 2025-12-01 22:59:00.010 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:59:00 compute-0 nova_compute[189508]: 2025-12-01 22:59:00.011 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:59:00 compute-0 nova_compute[189508]: 2025-12-01 22:59:00.011 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 22:59:00 compute-0 nova_compute[189508]: 2025-12-01 22:59:00.583 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:00 compute-0 nova_compute[189508]: 2025-12-01 22:59:00.995 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.035 189512 DEBUG nova.network.neutron [req-d2b6eb2d-c74a-4080-af70-4ea94c9e1674 req-38c73cb9-fc16-47f6-9bdc-f827bcf480ff c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Updated VIF entry in instance network info cache for port 92958b22-0bb7-41c6-9850-61c81cea56d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.035 189512 DEBUG nova.network.neutron [req-d2b6eb2d-c74a-4080-af70-4ea94c9e1674 req-38c73cb9-fc16-47f6-9bdc-f827bcf480ff c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Updating instance_info_cache with network_info: [{"id": "92958b22-0bb7-41c6-9850-61c81cea56d8", "address": "fa:16:3e:5c:2b:96", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92958b22-0b", "ovs_interfaceid": "92958b22-0bb7-41c6-9850-61c81cea56d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.055 189512 DEBUG oslo_concurrency.lockutils [req-d2b6eb2d-c74a-4080-af70-4ea94c9e1674 req-38c73cb9-fc16-47f6-9bdc-f827bcf480ff c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-a4f50c75-4c0a-4222-a614-20d83eba9a2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.230 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.231 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.231 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.232 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.373 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:59:01 compute-0 openstack_network_exporter[205887]: ERROR   22:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:59:01 compute-0 openstack_network_exporter[205887]: ERROR   22:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:59:01 compute-0 openstack_network_exporter[205887]: ERROR   22:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:59:01 compute-0 openstack_network_exporter[205887]: ERROR   22:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:59:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:59:01 compute-0 openstack_network_exporter[205887]: ERROR   22:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:59:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:59:01 compute-0 podman[253510]: 2025-12-01 22:59:01.463422948 +0000 UTC m=+0.132601601 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.475 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.476 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.570 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.578 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.662 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.663 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.726 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.736 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.797 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.798 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.855 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.864 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.925 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.926 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:59:01 compute-0 nova_compute[189508]: 2025-12-01 22:59:01.988 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:59:02 compute-0 nova_compute[189508]: 2025-12-01 22:59:02.423 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:59:02 compute-0 nova_compute[189508]: 2025-12-01 22:59:02.424 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4710MB free_disk=72.07109069824219GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 22:59:02 compute-0 nova_compute[189508]: 2025-12-01 22:59:02.425 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:02 compute-0 nova_compute[189508]: 2025-12-01 22:59:02.425 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:02 compute-0 nova_compute[189508]: 2025-12-01 22:59:02.536 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 6a2b0a2e-1144-4264-917f-086024e18bed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:59:02 compute-0 nova_compute[189508]: 2025-12-01 22:59:02.536 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 4d450663-4303-4535-bc1a-72996000c25a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:59:02 compute-0 nova_compute[189508]: 2025-12-01 22:59:02.536 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance d35b993a-ba2a-478d-b7f6-c7dfba36d402 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:59:02 compute-0 nova_compute[189508]: 2025-12-01 22:59:02.536 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance a4f50c75-4c0a-4222-a614-20d83eba9a2f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 22:59:02 compute-0 nova_compute[189508]: 2025-12-01 22:59:02.537 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 22:59:02 compute-0 nova_compute[189508]: 2025-12-01 22:59:02.537 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 22:59:02 compute-0 nova_compute[189508]: 2025-12-01 22:59:02.675 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:59:02 compute-0 nova_compute[189508]: 2025-12-01 22:59:02.695 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:59:02 compute-0 nova_compute[189508]: 2025-12-01 22:59:02.715 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 22:59:02 compute-0 nova_compute[189508]: 2025-12-01 22:59:02.716 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:03 compute-0 podman[253559]: 2025-12-01 22:59:03.843072892 +0000 UTC m=+0.118075469 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 22:59:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:04.641 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:04.642 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:04.643 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:05 compute-0 nova_compute[189508]: 2025-12-01 22:59:05.586 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:05 compute-0 podman[253579]: 2025-12-01 22:59:05.853684282 +0000 UTC m=+0.129633525 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 22:59:05 compute-0 nova_compute[189508]: 2025-12-01 22:59:05.997 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:10 compute-0 nova_compute[189508]: 2025-12-01 22:59:10.588 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:10 compute-0 nova_compute[189508]: 2025-12-01 22:59:10.998 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:11 compute-0 podman[253601]: 2025-12-01 22:59:11.841004643 +0000 UTC m=+0.110136724 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  1 22:59:11 compute-0 podman[253600]: 2025-12-01 22:59:11.897937797 +0000 UTC m=+0.178201794 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Dec  1 22:59:15 compute-0 nova_compute[189508]: 2025-12-01 22:59:15.592 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:16 compute-0 nova_compute[189508]: 2025-12-01 22:59:16.001 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:16 compute-0 podman[253645]: 2025-12-01 22:59:16.851558396 +0000 UTC m=+0.107672634 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., version=9.6, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, config_id=edpm, 
vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 22:59:16 compute-0 podman[253644]: 2025-12-01 22:59:16.857007331 +0000 UTC m=+0.112605464 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 22:59:16 compute-0 podman[253646]: 2025-12-01 22:59:16.863877685 +0000 UTC m=+0.118922533 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, config_id=edpm, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0)
Dec  1 22:59:16 compute-0 podman[253643]: 2025-12-01 22:59:16.86967949 +0000 UTC m=+0.135543935 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 22:59:18 compute-0 nova_compute[189508]: 2025-12-01 22:59:18.373 189512 DEBUG oslo_concurrency.lockutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Acquiring lock "4d450663-4303-4535-bc1a-72996000c25a" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:18 compute-0 nova_compute[189508]: 2025-12-01 22:59:18.374 189512 DEBUG oslo_concurrency.lockutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:18 compute-0 nova_compute[189508]: 2025-12-01 22:59:18.375 189512 INFO nova.compute.manager [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Rebooting instance#033[00m
Dec  1 22:59:18 compute-0 nova_compute[189508]: 2025-12-01 22:59:18.396 189512 DEBUG oslo_concurrency.lockutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Acquiring lock "refresh_cache-4d450663-4303-4535-bc1a-72996000c25a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 22:59:18 compute-0 nova_compute[189508]: 2025-12-01 22:59:18.397 189512 DEBUG oslo_concurrency.lockutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Acquired lock "refresh_cache-4d450663-4303-4535-bc1a-72996000c25a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 22:59:18 compute-0 nova_compute[189508]: 2025-12-01 22:59:18.397 189512 DEBUG nova.network.neutron [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 22:59:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:18.611 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:59:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:18.613 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 22:59:18 compute-0 nova_compute[189508]: 2025-12-01 22:59:18.613 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.561 189512 DEBUG nova.network.neutron [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Updating instance_info_cache with network_info: [{"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.578 189512 DEBUG oslo_concurrency.lockutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Releasing lock "refresh_cache-4d450663-4303-4535-bc1a-72996000c25a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.581 189512 DEBUG nova.compute.manager [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.598 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:20 compute-0 kernel: tapa139ed27-b7 (unregistering): left promiscuous mode
Dec  1 22:59:20 compute-0 NetworkManager[56278]: <info>  [1764629960.7196] device (tapa139ed27-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 22:59:20 compute-0 ovn_controller[97770]: 2025-12-01T22:59:20Z|00135|binding|INFO|Releasing lport a139ed27-b785-495f-bc93-2f5daea46d42 from this chassis (sb_readonly=0)
Dec  1 22:59:20 compute-0 ovn_controller[97770]: 2025-12-01T22:59:20Z|00136|binding|INFO|Setting lport a139ed27-b785-495f-bc93-2f5daea46d42 down in Southbound
Dec  1 22:59:20 compute-0 ovn_controller[97770]: 2025-12-01T22:59:20Z|00137|binding|INFO|Removing iface tapa139ed27-b7 ovn-installed in OVS
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.749 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:20.753 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:3e:a0 10.100.0.6'], port_security=['fa:16:3e:b8:3e:a0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4d450663-4303-4535-bc1a-72996000c25a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c3d0516-109b-46fb-ab67-19206f614258', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faa4919c58ee4a458bdb25fd4271bfde', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd06e5c87-dfe8-4629-aafa-87299e309e29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ebd388b8-c29a-49dc-9a3f-96f8cde4cd01, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=a139ed27-b785-495f-bc93-2f5daea46d42) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:59:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:20.758 106662 INFO neutron.agent.ovn.metadata.agent [-] Port a139ed27-b785-495f-bc93-2f5daea46d42 in datapath 7c3d0516-109b-46fb-ab67-19206f614258 unbound from our chassis#033[00m
Dec  1 22:59:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:20.759 106662 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c3d0516-109b-46fb-ab67-19206f614258, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 22:59:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:20.762 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[69b8c318-b7b5-40dc-954a-62f28c4e9d89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:20 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:20.763 106662 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258 namespace which is not needed anymore#033[00m
Dec  1 22:59:20 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec  1 22:59:20 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 42.180s CPU time.
Dec  1 22:59:20 compute-0 systemd-machined[155759]: Machine qemu-11-instance-0000000b terminated.
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.901 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.926 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:20 compute-0 neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258[252943]: [NOTICE]   (252947) : haproxy version is 2.8.14-c23fe91
Dec  1 22:59:20 compute-0 neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258[252943]: [NOTICE]   (252947) : path to executable is /usr/sbin/haproxy
Dec  1 22:59:20 compute-0 neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258[252943]: [WARNING]  (252947) : Exiting Master process...
Dec  1 22:59:20 compute-0 neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258[252943]: [WARNING]  (252947) : Exiting Master process...
Dec  1 22:59:20 compute-0 neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258[252943]: [ALERT]    (252947) : Current worker (252949) exited with code 143 (Terminated)
Dec  1 22:59:20 compute-0 neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258[252943]: [WARNING]  (252947) : All workers exited. Exiting... (0)
Dec  1 22:59:20 compute-0 systemd[1]: libpod-356b8c99c7bbd4597ffae3f9d160debc887c24a6ae5cd52288470fc8bcfcd126.scope: Deactivated successfully.
Dec  1 22:59:20 compute-0 podman[253747]: 2025-12-01 22:59:20.951603741 +0000 UTC m=+0.081084560 container died 356b8c99c7bbd4597ffae3f9d160debc887c24a6ae5cd52288470fc8bcfcd126 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.960 189512 INFO nova.virt.libvirt.driver [-] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Instance destroyed successfully.#033[00m
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.961 189512 DEBUG nova.objects.instance [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lazy-loading 'resources' on Instance uuid 4d450663-4303-4535-bc1a-72996000c25a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.987 189512 DEBUG nova.virt.libvirt.vif [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:57:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2091090341',display_name='tempest-ServerActionsTestJSON-server-2091090341',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2091090341',id=11,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+fzJbRUs6xTpBTH6qdTI6/Z5W+mGfJgDYfAUhpF05jRUFQOpZmqCMJhmfo4TTDAEYfG1aq/+blNkmuIybaiFy/eDEp+yVFf0iSiXkStUapi+PgaOcCydfsaALgr/g66Q==',key_name='tempest-keypair-87244995',keypairs=<?>,launch_index=0,launched_at=2025-12-01T22:58:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='faa4919c58ee4a458bdb25fd4271bfde',ramdisk_id='',reservation_id='r-lf97gff3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1483688623',owner_user_name='tempest-ServerActionsTestJSON-1483688623-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T22:59:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f27393706a734cf3bee31de08a363c23',uuid=4d450663-4303-4535-bc1a-72996000c25a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.988 189512 DEBUG nova.network.os_vif_util [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Converting VIF {"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.989 189512 DEBUG nova.network.os_vif_util [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:3e:a0,bridge_name='br-int',has_traffic_filtering=True,id=a139ed27-b785-495f-bc93-2f5daea46d42,network=Network(7c3d0516-109b-46fb-ab67-19206f614258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa139ed27-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.990 189512 DEBUG os_vif [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:3e:a0,bridge_name='br-int',has_traffic_filtering=True,id=a139ed27-b785-495f-bc93-2f5daea46d42,network=Network(7c3d0516-109b-46fb-ab67-19206f614258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa139ed27-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.992 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.993 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa139ed27-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.995 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-356b8c99c7bbd4597ffae3f9d160debc887c24a6ae5cd52288470fc8bcfcd126-userdata-shm.mount: Deactivated successfully.
Dec  1 22:59:20 compute-0 nova_compute[189508]: 2025-12-01 22:59:20.998 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9b1f32ae5fe73becfb1a61c774be9d4163a4bea30877e50defdd0f3200b176b-merged.mount: Deactivated successfully.
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.001 189512 INFO os_vif [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:3e:a0,bridge_name='br-int',has_traffic_filtering=True,id=a139ed27-b785-495f-bc93-2f5daea46d42,network=Network(7c3d0516-109b-46fb-ab67-19206f614258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa139ed27-b7')#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.009 189512 DEBUG nova.virt.libvirt.driver [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Start _get_guest_xml network_info=[{"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '74bb08bf-1799-4930-aad4-d505f26ff5f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 22:59:21 compute-0 podman[253747]: 2025-12-01 22:59:21.011673234 +0000 UTC m=+0.141154043 container cleanup 356b8c99c7bbd4597ffae3f9d160debc887c24a6ae5cd52288470fc8bcfcd126 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.016 189512 WARNING nova.virt.libvirt.driver [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 22:59:21 compute-0 systemd[1]: libpod-conmon-356b8c99c7bbd4597ffae3f9d160debc887c24a6ae5cd52288470fc8bcfcd126.scope: Deactivated successfully.
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.025 189512 DEBUG nova.virt.libvirt.host [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.025 189512 DEBUG nova.virt.libvirt.host [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.030 189512 DEBUG nova.virt.libvirt.host [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.031 189512 DEBUG nova.virt.libvirt.host [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.031 189512 DEBUG nova.virt.libvirt.driver [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.032 189512 DEBUG nova.virt.hardware [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T22:55:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e42a55e-71e2-4041-8ca2-725d63f058bf',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=74bb08bf-1799-4930-aad4-d505f26ff5f4,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.032 189512 DEBUG nova.virt.hardware [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.032 189512 DEBUG nova.virt.hardware [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.033 189512 DEBUG nova.virt.hardware [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.033 189512 DEBUG nova.virt.hardware [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.033 189512 DEBUG nova.virt.hardware [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.034 189512 DEBUG nova.virt.hardware [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.034 189512 DEBUG nova.virt.hardware [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.035 189512 DEBUG nova.virt.hardware [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.035 189512 DEBUG nova.virt.hardware [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.035 189512 DEBUG nova.virt.hardware [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.036 189512 DEBUG nova.objects.instance [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4d450663-4303-4535-bc1a-72996000c25a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.056 189512 DEBUG oslo_concurrency.processutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.083 189512 DEBUG nova.compute.manager [req-0736285d-2f91-4cd0-9b22-4561ee2ff750 req-a1712803-7907-4ee6-984b-21d376f5910a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received event network-vif-unplugged-a139ed27-b785-495f-bc93-2f5daea46d42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.083 189512 DEBUG oslo_concurrency.lockutils [req-0736285d-2f91-4cd0-9b22-4561ee2ff750 req-a1712803-7907-4ee6-984b-21d376f5910a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "4d450663-4303-4535-bc1a-72996000c25a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.084 189512 DEBUG oslo_concurrency.lockutils [req-0736285d-2f91-4cd0-9b22-4561ee2ff750 req-a1712803-7907-4ee6-984b-21d376f5910a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.084 189512 DEBUG oslo_concurrency.lockutils [req-0736285d-2f91-4cd0-9b22-4561ee2ff750 req-a1712803-7907-4ee6-984b-21d376f5910a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.084 189512 DEBUG nova.compute.manager [req-0736285d-2f91-4cd0-9b22-4561ee2ff750 req-a1712803-7907-4ee6-984b-21d376f5910a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] No waiting events found dispatching network-vif-unplugged-a139ed27-b785-495f-bc93-2f5daea46d42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.085 189512 WARNING nova.compute.manager [req-0736285d-2f91-4cd0-9b22-4561ee2ff750 req-a1712803-7907-4ee6-984b-21d376f5910a c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received unexpected event network-vif-unplugged-a139ed27-b785-495f-bc93-2f5daea46d42 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec  1 22:59:21 compute-0 podman[253790]: 2025-12-01 22:59:21.085242991 +0000 UTC m=+0.046299584 container remove 356b8c99c7bbd4597ffae3f9d160debc887c24a6ae5cd52288470fc8bcfcd126 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.094 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d77f30-b75b-46d0-80b3-b1558e3ed9ae]: (4, ('Mon Dec  1 10:59:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258 (356b8c99c7bbd4597ffae3f9d160debc887c24a6ae5cd52288470fc8bcfcd126)\n356b8c99c7bbd4597ffae3f9d160debc887c24a6ae5cd52288470fc8bcfcd126\nMon Dec  1 10:59:21 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258 (356b8c99c7bbd4597ffae3f9d160debc887c24a6ae5cd52288470fc8bcfcd126)\n356b8c99c7bbd4597ffae3f9d160debc887c24a6ae5cd52288470fc8bcfcd126\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.096 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[6309de46-9925-4bf5-b239-4dcd421f3ffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.097 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c3d0516-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.099 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:21 compute-0 kernel: tap7c3d0516-10: left promiscuous mode
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.103 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.115 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[3338fc0b-e047-4950-9239-d59780b64d98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.116 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.129 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[81c74197-428b-4b91-bf3d-1f1c86a8024b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.130 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[c10a8cf7-d587-425f-bbf3-a49effb0635f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.132 189512 DEBUG oslo_concurrency.processutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk.config --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.133 189512 DEBUG oslo_concurrency.lockutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Acquiring lock "/var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.134 189512 DEBUG oslo_concurrency.lockutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "/var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.134 189512 DEBUG oslo_concurrency.lockutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "/var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.135 189512 DEBUG nova.virt.libvirt.vif [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:57:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2091090341',display_name='tempest-ServerActionsTestJSON-server-2091090341',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2091090341',id=11,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+fzJbRUs6xTpBTH6qdTI6/Z5W+mGfJgDYfAUhpF05jRUFQOpZmqCMJhmfo4TTDAEYfG1aq/+blNkmuIybaiFy/eDEp+yVFf0iSiXkStUapi+PgaOcCydfsaALgr/g66Q==',key_name='tempest-keypair-87244995',keypairs=<?>,launch_index=0,launched_at=2025-12-01T22:58:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='faa4919c58ee4a458bdb25fd4271bfde',ramdisk_id='',reservation_id='r-lf97gff3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1483688623',owner_user_name='tempest-ServerActionsTestJSON-1483688623-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T22:59:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f27393706a734cf3bee31de08a363c23',uuid=4d450663-4303-4535-bc1a-72996000c25a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.136 189512 DEBUG nova.network.os_vif_util [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Converting VIF {"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.137 189512 DEBUG nova.network.os_vif_util [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:3e:a0,bridge_name='br-int',has_traffic_filtering=True,id=a139ed27-b785-495f-bc93-2f5daea46d42,network=Network(7c3d0516-109b-46fb-ab67-19206f614258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa139ed27-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.138 189512 DEBUG nova.objects.instance [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d450663-4303-4535-bc1a-72996000c25a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.146 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef01244-445a-465d-a2ff-7ed18fe656be]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539640, 'reachable_time': 40285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253808, 'error': None, 'target': 'ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.149 106770 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.150 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[976a3872-7f94-411a-8564-37bb938d52e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c3d0516\x2d109b\x2d46fb\x2dab67\x2d19206f614258.mount: Deactivated successfully.
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.169 189512 DEBUG nova.virt.libvirt.driver [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] End _get_guest_xml xml=<domain type="kvm">
Dec  1 22:59:21 compute-0 nova_compute[189508]:  <uuid>4d450663-4303-4535-bc1a-72996000c25a</uuid>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  <name>instance-0000000b</name>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  <memory>131072</memory>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  <vcpu>1</vcpu>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  <metadata>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <nova:name>tempest-ServerActionsTestJSON-server-2091090341</nova:name>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <nova:creationTime>2025-12-01 22:59:21</nova:creationTime>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <nova:flavor name="m1.nano">
Dec  1 22:59:21 compute-0 nova_compute[189508]:        <nova:memory>128</nova:memory>
Dec  1 22:59:21 compute-0 nova_compute[189508]:        <nova:disk>1</nova:disk>
Dec  1 22:59:21 compute-0 nova_compute[189508]:        <nova:swap>0</nova:swap>
Dec  1 22:59:21 compute-0 nova_compute[189508]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 22:59:21 compute-0 nova_compute[189508]:        <nova:vcpus>1</nova:vcpus>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      </nova:flavor>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <nova:owner>
Dec  1 22:59:21 compute-0 nova_compute[189508]:        <nova:user uuid="f27393706a734cf3bee31de08a363c23">tempest-ServerActionsTestJSON-1483688623-project-member</nova:user>
Dec  1 22:59:21 compute-0 nova_compute[189508]:        <nova:project uuid="faa4919c58ee4a458bdb25fd4271bfde">tempest-ServerActionsTestJSON-1483688623</nova:project>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      </nova:owner>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <nova:root type="image" uuid="74bb08bf-1799-4930-aad4-d505f26ff5f4"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <nova:ports>
Dec  1 22:59:21 compute-0 nova_compute[189508]:        <nova:port uuid="a139ed27-b785-495f-bc93-2f5daea46d42">
Dec  1 22:59:21 compute-0 nova_compute[189508]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:        </nova:port>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      </nova:ports>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    </nova:instance>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  </metadata>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  <sysinfo type="smbios">
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <system>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <entry name="manufacturer">RDO</entry>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <entry name="product">OpenStack Compute</entry>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <entry name="serial">4d450663-4303-4535-bc1a-72996000c25a</entry>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <entry name="uuid">4d450663-4303-4535-bc1a-72996000c25a</entry>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <entry name="family">Virtual Machine</entry>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    </system>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  </sysinfo>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  <os>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <boot dev="hd"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <smbios mode="sysinfo"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  </os>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  <features>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <acpi/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <apic/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <vmcoreinfo/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  </features>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  <clock offset="utc">
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <timer name="hpet" present="no"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  </clock>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  <cpu mode="host-model" match="exact">
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  </cpu>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  <devices>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <target dev="vda" bus="virtio"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <disk type="file" device="cdrom">
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk.config"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <target dev="sda" bus="sata"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    </disk>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <interface type="ethernet">
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <mac address="fa:16:3e:b8:3e:a0"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <mtu size="1442"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <target dev="tapa139ed27-b7"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    </interface>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <serial type="pty">
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <log file="/var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/console.log" append="off"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    </serial>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <video>
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    </video>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <input type="tablet" bus="usb"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <input type="keyboard" bus="usb"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <rng model="virtio">
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <backend model="random">/dev/urandom</backend>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    </rng>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <controller type="usb" index="0"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    <memballoon model="virtio">
Dec  1 22:59:21 compute-0 nova_compute[189508]:      <stats period="10"/>
Dec  1 22:59:21 compute-0 nova_compute[189508]:    </memballoon>
Dec  1 22:59:21 compute-0 nova_compute[189508]:  </devices>
Dec  1 22:59:21 compute-0 nova_compute[189508]: </domain>
Dec  1 22:59:21 compute-0 nova_compute[189508]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.170 189512 DEBUG oslo_concurrency.processutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.226 189512 DEBUG oslo_concurrency.processutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.227 189512 DEBUG oslo_concurrency.processutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.286 189512 DEBUG oslo_concurrency.processutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.288 189512 DEBUG nova.objects.instance [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4d450663-4303-4535-bc1a-72996000c25a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.303 189512 DEBUG oslo_concurrency.processutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.361 189512 DEBUG oslo_concurrency.processutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.362 189512 DEBUG nova.virt.disk.api [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Checking if we can resize image /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.363 189512 DEBUG oslo_concurrency.processutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.441 189512 DEBUG oslo_concurrency.processutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.443 189512 DEBUG nova.virt.disk.api [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Cannot resize image /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.443 189512 DEBUG nova.objects.instance [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lazy-loading 'migration_context' on Instance uuid 4d450663-4303-4535-bc1a-72996000c25a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.459 189512 DEBUG nova.virt.libvirt.vif [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:57:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2091090341',display_name='tempest-ServerActionsTestJSON-server-2091090341',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2091090341',id=11,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+fzJbRUs6xTpBTH6qdTI6/Z5W+mGfJgDYfAUhpF05jRUFQOpZmqCMJhmfo4TTDAEYfG1aq/+blNkmuIybaiFy/eDEp+yVFf0iSiXkStUapi+PgaOcCydfsaALgr/g66Q==',key_name='tempest-keypair-87244995',keypairs=<?>,launch_index=0,launched_at=2025-12-01T22:58:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='faa4919c58ee4a458bdb25fd4271bfde',ramdisk_id='',reservation_id='r-lf97gff3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1483688623',owner_user_name='tempest-ServerActionsTestJSON-1483688623-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T22:59:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f27393706a734cf3bee31de08a363c23',uuid=4d450663-4303-4535-bc1a-72996000c25a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.459 189512 DEBUG nova.network.os_vif_util [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Converting VIF {"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.460 189512 DEBUG nova.network.os_vif_util [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:3e:a0,bridge_name='br-int',has_traffic_filtering=True,id=a139ed27-b785-495f-bc93-2f5daea46d42,network=Network(7c3d0516-109b-46fb-ab67-19206f614258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa139ed27-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.461 189512 DEBUG os_vif [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:3e:a0,bridge_name='br-int',has_traffic_filtering=True,id=a139ed27-b785-495f-bc93-2f5daea46d42,network=Network(7c3d0516-109b-46fb-ab67-19206f614258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa139ed27-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.462 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.462 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.463 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.466 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.466 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa139ed27-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.467 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa139ed27-b7, col_values=(('external_ids', {'iface-id': 'a139ed27-b785-495f-bc93-2f5daea46d42', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:3e:a0', 'vm-uuid': '4d450663-4303-4535-bc1a-72996000c25a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.469 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:21 compute-0 NetworkManager[56278]: <info>  [1764629961.4704] manager: (tapa139ed27-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.471 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.480 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.481 189512 INFO os_vif [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:3e:a0,bridge_name='br-int',has_traffic_filtering=True,id=a139ed27-b785-495f-bc93-2f5daea46d42,network=Network(7c3d0516-109b-46fb-ab67-19206f614258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa139ed27-b7')#033[00m
Dec  1 22:59:21 compute-0 kernel: tapa139ed27-b7: entered promiscuous mode
Dec  1 22:59:21 compute-0 NetworkManager[56278]: <info>  [1764629961.5695] manager: (tapa139ed27-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Dec  1 22:59:21 compute-0 ovn_controller[97770]: 2025-12-01T22:59:21Z|00138|binding|INFO|Claiming lport a139ed27-b785-495f-bc93-2f5daea46d42 for this chassis.
Dec  1 22:59:21 compute-0 ovn_controller[97770]: 2025-12-01T22:59:21Z|00139|binding|INFO|a139ed27-b785-495f-bc93-2f5daea46d42: Claiming fa:16:3e:b8:3e:a0 10.100.0.6
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.569 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:21 compute-0 systemd-udevd[253727]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 22:59:21 compute-0 NetworkManager[56278]: <info>  [1764629961.5905] device (tapa139ed27-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.588 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:3e:a0 10.100.0.6'], port_security=['fa:16:3e:b8:3e:a0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4d450663-4303-4535-bc1a-72996000c25a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c3d0516-109b-46fb-ab67-19206f614258', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faa4919c58ee4a458bdb25fd4271bfde', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd06e5c87-dfe8-4629-aafa-87299e309e29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ebd388b8-c29a-49dc-9a3f-96f8cde4cd01, chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=a139ed27-b785-495f-bc93-2f5daea46d42) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.589 106662 INFO neutron.agent.ovn.metadata.agent [-] Port a139ed27-b785-495f-bc93-2f5daea46d42 in datapath 7c3d0516-109b-46fb-ab67-19206f614258 bound to our chassis#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.591 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c3d0516-109b-46fb-ab67-19206f614258#033[00m
Dec  1 22:59:21 compute-0 ovn_controller[97770]: 2025-12-01T22:59:21Z|00140|binding|INFO|Setting lport a139ed27-b785-495f-bc93-2f5daea46d42 ovn-installed in OVS
Dec  1 22:59:21 compute-0 ovn_controller[97770]: 2025-12-01T22:59:21Z|00141|binding|INFO|Setting lport a139ed27-b785-495f-bc93-2f5daea46d42 up in Southbound
Dec  1 22:59:21 compute-0 NetworkManager[56278]: <info>  [1764629961.5971] device (tapa139ed27-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.597 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.600 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.603 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc8a6fd-e92e-4241-a0d3-8965a56b9c6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.604 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c3d0516-11 in ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.606 239973 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c3d0516-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.606 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[8021eda5-2de0-4f43-8185-b781c67baf36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.608 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[dae6393e-cfea-44f9-9308-3ddb102c0220]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.609 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.631 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[407b17ee-ea22-404c-8496-2a41c07f6d7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 systemd-machined[155759]: New machine qemu-14-instance-0000000b.
Dec  1 22:59:21 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000b.
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.660 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[84fdf9ac-eae2-4a11-ba4d-2808eb42c2af]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.716 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[011534af-a055-40e8-ad1e-418ae7df6df0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.726 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e4eb2f-8e8b-4e9f-be4c-43b4af0f4884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 NetworkManager[56278]: <info>  [1764629961.7370] manager: (tap7c3d0516-10): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.768 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[64438046-98e1-40b3-9086-491f7b0a15b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.771 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf69a8e-ef64-4452-9afb-bcc48b3263fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 NetworkManager[56278]: <info>  [1764629961.8051] device (tap7c3d0516-10): carrier: link connected
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.811 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[ae36f36f-4dbf-466e-8e94-ace4c8891cd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.839 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[030a9140-691d-44e1-be2e-91ace58ba259]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c3d0516-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:2b:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547137, 'reachable_time': 32046, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253868, 'error': None, 'target': 'ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.856 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[541bfc0c-69e1-42b9-937e-73f5f39ff562]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:2bc5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547137, 'tstamp': 547137}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253869, 'error': None, 'target': 'ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.871 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[c62bac21-97b2-4475-b8be-5c5325e55098]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c3d0516-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:2b:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547137, 'reachable_time': 32046, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253870, 'error': None, 'target': 'ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.906 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[b9656951-205f-4208-8c07-ec63b734a27b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.972 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[74ea3566-1271-47e0-b49e-64d884e68db9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.975 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c3d0516-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.976 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.976 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c3d0516-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.978 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:21 compute-0 kernel: tap7c3d0516-10: entered promiscuous mode
Dec  1 22:59:21 compute-0 NetworkManager[56278]: <info>  [1764629961.9791] manager: (tap7c3d0516-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.982 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:59:21 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:21.984 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c3d0516-10, col_values=(('external_ids', {'iface-id': '59cd1803-8a52-4381-bb39-d2aa1220acc5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 22:59:21 compute-0 nova_compute[189508]: 2025-12-01 22:59:21.985 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:59:21 compute-0 ovn_controller[97770]: 2025-12-01T22:59:21Z|00142|binding|INFO|Releasing lport 59cd1803-8a52-4381-bb39-d2aa1220acc5 from this chassis (sb_readonly=0)
Dec  1 22:59:22 compute-0 nova_compute[189508]: 2025-12-01 22:59:22.000 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:59:22 compute-0 nova_compute[189508]: 2025-12-01 22:59:22.001 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:22.002 106662 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c3d0516-109b-46fb-ab67-19206f614258.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c3d0516-109b-46fb-ab67-19206f614258.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:22.003 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[b7efcc87-f66e-4f4e-9a99-f045cd99b5aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:22.004 106662 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]: global
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    log         /dev/log local0 debug
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    log-tag     haproxy-metadata-proxy-7c3d0516-109b-46fb-ab67-19206f614258
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    user        root
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    group       root
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    maxconn     1024
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    pidfile     /var/lib/neutron/external/pids/7c3d0516-109b-46fb-ab67-19206f614258.pid.haproxy
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    daemon
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]: defaults
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    log global
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    mode http
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    option httplog
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    option dontlognull
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    option http-server-close
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    option forwardfor
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    retries                 3
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    timeout http-request    30s
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    timeout connect         30s
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    timeout client          32s
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    timeout server          32s
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    timeout http-keep-alive 30s
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]: listen listener
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    bind 169.254.169.254:80
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]:    http-request add-header X-OVN-Network-ID 7c3d0516-109b-46fb-ab67-19206f614258
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec  1 22:59:22 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:22.005 106662 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258', 'env', 'PROCESS_TAG=haproxy-7c3d0516-109b-46fb-ab67-19206f614258', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c3d0516-109b-46fb-ab67-19206f614258.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec  1 22:59:22 compute-0 nova_compute[189508]: 2025-12-01 22:59:22.199 189512 DEBUG nova.virt.libvirt.host [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Removed pending event for 4d450663-4303-4535-bc1a-72996000c25a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec  1 22:59:22 compute-0 nova_compute[189508]: 2025-12-01 22:59:22.200 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629962.1985009, 4d450663-4303-4535-bc1a-72996000c25a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 22:59:22 compute-0 nova_compute[189508]: 2025-12-01 22:59:22.200 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] VM Resumed (Lifecycle Event)
Dec  1 22:59:22 compute-0 nova_compute[189508]: 2025-12-01 22:59:22.205 189512 DEBUG nova.compute.manager [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  1 22:59:22 compute-0 nova_compute[189508]: 2025-12-01 22:59:22.211 189512 INFO nova.virt.libvirt.driver [-] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Instance rebooted successfully.
Dec  1 22:59:22 compute-0 nova_compute[189508]: 2025-12-01 22:59:22.211 189512 DEBUG nova.compute.manager [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 22:59:22 compute-0 nova_compute[189508]: 2025-12-01 22:59:22.232 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 22:59:22 compute-0 nova_compute[189508]: 2025-12-01 22:59:22.246 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 22:59:22 compute-0 nova_compute[189508]: 2025-12-01 22:59:22.294 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Dec  1 22:59:22 compute-0 nova_compute[189508]: 2025-12-01 22:59:22.294 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764629962.2027478, 4d450663-4303-4535-bc1a-72996000c25a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 22:59:22 compute-0 nova_compute[189508]: 2025-12-01 22:59:22.294 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] VM Started (Lifecycle Event)
Dec  1 22:59:22 compute-0 nova_compute[189508]: 2025-12-01 22:59:22.307 189512 DEBUG oslo_concurrency.lockutils [None req-bde9a23d-3f67-42f2-9358-a02055743b31 f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 3.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:59:22 compute-0 nova_compute[189508]: 2025-12-01 22:59:22.330 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 22:59:22 compute-0 nova_compute[189508]: 2025-12-01 22:59:22.336 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 22:59:22 compute-0 podman[253908]: 2025-12-01 22:59:22.485506215 +0000 UTC m=+0.071625502 container create 7536e6748e22aec87984fc0b6d5d2d869c6fbde789d182d8081aa7dc9f7df2a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 22:59:22 compute-0 podman[253908]: 2025-12-01 22:59:22.43912672 +0000 UTC m=+0.025246057 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 22:59:22 compute-0 systemd[1]: Started libpod-conmon-7536e6748e22aec87984fc0b6d5d2d869c6fbde789d182d8081aa7dc9f7df2a9.scope.
Dec  1 22:59:22 compute-0 systemd[1]: Started libcrun container.
Dec  1 22:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf4f6bf767e255704fb688284d252be9e7f43de80ef44c52678cab3cf827ed95/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 22:59:22 compute-0 podman[253908]: 2025-12-01 22:59:22.638218105 +0000 UTC m=+0.224337412 container init 7536e6748e22aec87984fc0b6d5d2d869c6fbde789d182d8081aa7dc9f7df2a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  1 22:59:22 compute-0 podman[253908]: 2025-12-01 22:59:22.645667987 +0000 UTC m=+0.231787274 container start 7536e6748e22aec87984fc0b6d5d2d869c6fbde789d182d8081aa7dc9f7df2a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 22:59:22 compute-0 neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258[253921]: [NOTICE]   (253925) : New worker (253927) forked
Dec  1 22:59:22 compute-0 neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258[253921]: [NOTICE]   (253925) : Loading success.
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.188 189512 DEBUG nova.compute.manager [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received event network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.188 189512 DEBUG oslo_concurrency.lockutils [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "4d450663-4303-4535-bc1a-72996000c25a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.188 189512 DEBUG oslo_concurrency.lockutils [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.189 189512 DEBUG oslo_concurrency.lockutils [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.189 189512 DEBUG nova.compute.manager [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] No waiting events found dispatching network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.189 189512 WARNING nova.compute.manager [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received unexpected event network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 for instance with vm_state active and task_state None.
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.190 189512 DEBUG nova.compute.manager [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received event network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.190 189512 DEBUG oslo_concurrency.lockutils [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "4d450663-4303-4535-bc1a-72996000c25a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.190 189512 DEBUG oslo_concurrency.lockutils [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.191 189512 DEBUG oslo_concurrency.lockutils [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.191 189512 DEBUG nova.compute.manager [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] No waiting events found dispatching network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.191 189512 WARNING nova.compute.manager [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received unexpected event network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 for instance with vm_state active and task_state None.
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.192 189512 DEBUG nova.compute.manager [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received event network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.192 189512 DEBUG oslo_concurrency.lockutils [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "4d450663-4303-4535-bc1a-72996000c25a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.192 189512 DEBUG oslo_concurrency.lockutils [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.192 189512 DEBUG oslo_concurrency.lockutils [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.193 189512 DEBUG nova.compute.manager [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] No waiting events found dispatching network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 22:59:23 compute-0 nova_compute[189508]: 2025-12-01 22:59:23.193 189512 WARNING nova.compute.manager [req-41e85abe-143d-43cd-af17-4db0879d0aaf req-0d167f57-d72e-4134-a7fd-42ba9e7c64ac c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received unexpected event network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 for instance with vm_state active and task_state None.
Dec  1 22:59:25 compute-0 nova_compute[189508]: 2025-12-01 22:59:25.597 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:59:26 compute-0 nova_compute[189508]: 2025-12-01 22:59:26.471 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:59:26 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:26.616 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 22:59:28 compute-0 ovn_controller[97770]: 2025-12-01T22:59:28Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5c:2b:96 10.100.0.7
Dec  1 22:59:28 compute-0 ovn_controller[97770]: 2025-12-01T22:59:28Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5c:2b:96 10.100.0.7
Dec  1 22:59:29 compute-0 podman[203693]: time="2025-12-01T22:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:59:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31990 "" "Go-http-client/1.1"
Dec  1 22:59:29 compute-0 podman[203693]: @ - - [01/Dec/2025:22:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5738 "" "Go-http-client/1.1"
Dec  1 22:59:30 compute-0 nova_compute[189508]: 2025-12-01 22:59:30.599 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:59:31 compute-0 openstack_network_exporter[205887]: ERROR   22:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:59:31 compute-0 openstack_network_exporter[205887]: ERROR   22:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 22:59:31 compute-0 openstack_network_exporter[205887]: ERROR   22:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 22:59:31 compute-0 openstack_network_exporter[205887]: ERROR   22:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 22:59:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:59:31 compute-0 openstack_network_exporter[205887]: ERROR   22:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 22:59:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 22:59:31 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:31.449 106765 DEBUG eventlet.wsgi.server [-] (106765) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Dec  1 22:59:31 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:31.451 106765 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Dec  1 22:59:31 compute-0 ovn_metadata_agent[106657]: Accept: */*
Dec  1 22:59:31 compute-0 ovn_metadata_agent[106657]: Connection: close
Dec  1 22:59:31 compute-0 ovn_metadata_agent[106657]: Content-Type: text/plain
Dec  1 22:59:31 compute-0 ovn_metadata_agent[106657]: Host: 169.254.169.254
Dec  1 22:59:31 compute-0 ovn_metadata_agent[106657]: User-Agent: curl/7.84.0
Dec  1 22:59:31 compute-0 ovn_metadata_agent[106657]: X-Forwarded-For: 10.100.0.8
Dec  1 22:59:31 compute-0 ovn_metadata_agent[106657]: X-Ovn-Network-Id: 27ca9db6-6725-47fe-b0f9-957bed1ac95a __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Dec  1 22:59:31 compute-0 nova_compute[189508]: 2025-12-01 22:59:31.473 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:59:31 compute-0 podman[253962]: 2025-12-01 22:59:31.828032801 +0000 UTC m=+0.105465371 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 22:59:33 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:33.442 106765 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Dec  1 22:59:33 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:33.443 106765 INFO eventlet.wsgi.server [-] 10.100.0.8,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.9922550
Dec  1 22:59:33 compute-0 haproxy-metadata-proxy-27ca9db6-6725-47fe-b0f9-957bed1ac95a[253209]: 10.100.0.8:42504 [01/Dec/2025:22:59:31.448] listener listener/metadata 0/0/0/1995/1995 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Dec  1 22:59:33 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:33.576 106765 DEBUG eventlet.wsgi.server [-] (106765) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Dec  1 22:59:33 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:33.576 106765 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Dec  1 22:59:33 compute-0 ovn_metadata_agent[106657]: Accept: */*
Dec  1 22:59:33 compute-0 ovn_metadata_agent[106657]: Connection: close
Dec  1 22:59:33 compute-0 ovn_metadata_agent[106657]: Content-Length: 100
Dec  1 22:59:33 compute-0 ovn_metadata_agent[106657]: Content-Type: application/x-www-form-urlencoded
Dec  1 22:59:33 compute-0 ovn_metadata_agent[106657]: Host: 169.254.169.254
Dec  1 22:59:33 compute-0 ovn_metadata_agent[106657]: User-Agent: curl/7.84.0
Dec  1 22:59:33 compute-0 ovn_metadata_agent[106657]: X-Forwarded-For: 10.100.0.8
Dec  1 22:59:33 compute-0 ovn_metadata_agent[106657]: X-Ovn-Network-Id: 27ca9db6-6725-47fe-b0f9-957bed1ac95a
Dec  1 22:59:33 compute-0 ovn_metadata_agent[106657]: 
Dec  1 22:59:33 compute-0 ovn_metadata_agent[106657]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Dec  1 22:59:33 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:33.858 106765 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Dec  1 22:59:33 compute-0 haproxy-metadata-proxy-27ca9db6-6725-47fe-b0f9-957bed1ac95a[253209]: 10.100.0.8:51236 [01/Dec/2025:22:59:33.574] listener listener/metadata 0/0/0/285/285 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Dec  1 22:59:33 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:33.859 106765 INFO eventlet.wsgi.server [-] 10.100.0.8,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2825987
Dec  1 22:59:34 compute-0 nova_compute[189508]: 2025-12-01 22:59:34.649 189512 INFO nova.compute.manager [None req-cbeca752-746c-4279-b328-0191943506d5 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Get console output
Dec  1 22:59:34 compute-0 nova_compute[189508]: 2025-12-01 22:59:34.659 239842 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec  1 22:59:34 compute-0 podman[253983]: 2025-12-01 22:59:34.830353832 +0000 UTC m=+0.100321626 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.275 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.276 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.276 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.284 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance d35b993a-ba2a-478d-b7f6-c7dfba36d402 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.286 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/d35b993a-ba2a-478d-b7f6-c7dfba36d402 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82f68aee2d35afc7725a847ea4300457258faf9d3b47fbdf3a1dc69f53294b24" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.601 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.615 189512 DEBUG oslo_concurrency.lockutils [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.616 189512 DEBUG oslo_concurrency.lockutils [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.616 189512 DEBUG oslo_concurrency.lockutils [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.616 189512 DEBUG oslo_concurrency.lockutils [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.617 189512 DEBUG oslo_concurrency.lockutils [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.618 189512 INFO nova.compute.manager [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Terminating instance#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.619 189512 DEBUG nova.compute.manager [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 22:59:35 compute-0 kernel: tap92958b22-0b (unregistering): left promiscuous mode
Dec  1 22:59:35 compute-0 NetworkManager[56278]: <info>  [1764629975.6734] device (tap92958b22-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 22:59:35 compute-0 ovn_controller[97770]: 2025-12-01T22:59:35Z|00143|binding|INFO|Releasing lport 92958b22-0bb7-41c6-9850-61c81cea56d8 from this chassis (sb_readonly=0)
Dec  1 22:59:35 compute-0 ovn_controller[97770]: 2025-12-01T22:59:35Z|00144|binding|INFO|Setting lport 92958b22-0bb7-41c6-9850-61c81cea56d8 down in Southbound
Dec  1 22:59:35 compute-0 ovn_controller[97770]: 2025-12-01T22:59:35Z|00145|binding|INFO|Removing iface tap92958b22-0b ovn-installed in OVS
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.686 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.692 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:35 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:35.694 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:2b:96 10.100.0.7'], port_security=['fa:16:3e:5c:2b:96 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a4f50c75-4c0a-4222-a614-20d83eba9a2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-513808ab-c863-4790-88e3-b64040a0ed8a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43a7ae6a25114fd199de68dfe3d3217b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4de5fd7e-e0c4-4a2c-a479-6e7aa60056a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e643dba6-de01-4938-9750-33d8ce8dfa77, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=92958b22-0bb7-41c6-9850-61c81cea56d8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:59:35 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:35.696 106662 INFO neutron.agent.ovn.metadata.agent [-] Port 92958b22-0bb7-41c6-9850-61c81cea56d8 in datapath 513808ab-c863-4790-88e3-b64040a0ed8a unbound from our chassis#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.700 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:35 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:35.701 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 513808ab-c863-4790-88e3-b64040a0ed8a#033[00m
Dec  1 22:59:35 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:35.722 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[9610d0c9-6204-4c8e-8cdb-61ac83213d3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:35 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec  1 22:59:35 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 37.632s CPU time.
Dec  1 22:59:35 compute-0 systemd-machined[155759]: Machine qemu-13-instance-0000000d terminated.
Dec  1 22:59:35 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:35.765 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[33830e47-b2e1-450e-afc0-6e92ad234e12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:35 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:35.769 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[ad6ed193-4b54-4b8b-a03b-e60423d3bad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:35 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:35.806 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[cee9193d-3620-4c59-aaeb-bf05ce50ffec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:35 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:35.827 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[98d57313-b731-4779-b37f-2eca84448a74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap513808ab-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0c:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537641, 'reachable_time': 30370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254015, 'error': None, 'target': 'ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:35 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:35.853 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[5b0a0a6d-ed81-441a-90ac-18942240862b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap513808ab-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537656, 'tstamp': 537656}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254017, 'error': None, 'target': 'ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap513808ab-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537660, 'tstamp': 537660}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254017, 'error': None, 'target': 'ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:35 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:35.866 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap513808ab-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.868 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.876 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:35 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:35.877 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap513808ab-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:59:35 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:35.877 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:59:35 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:35.877 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap513808ab-c0, col_values=(('external_ids', {'iface-id': 'c21d900e-9830-49c7-a1df-ef9de7493e3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:59:35 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:35.878 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.919 189512 INFO nova.virt.libvirt.driver [-] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Instance destroyed successfully.#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.921 189512 DEBUG nova.objects.instance [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lazy-loading 'resources' on Instance uuid a4f50c75-4c0a-4222-a614-20d83eba9a2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.954 189512 DEBUG nova.virt.libvirt.vif [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:58:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1443491023',display_name='tempest-TestNetworkBasicOps-server-1443491023',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1443491023',id=13,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO5/vETY/it++hsDSzhTJNzHqx2Ih5naRH2QDqJ/NpOo3aoxUADDOFLjhO4K6mh2gX88uJUq6wuasKMqVILKGhtLSRmx2p7LIM/ZzaRAEfijcPif/+1DksRYivz9VOHF8g==',key_name='tempest-TestNetworkBasicOps-940390349',keypairs=<?>,launch_index=0,launched_at=2025-12-01T22:58:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='43a7ae6a25114fd199de68dfe3d3217b',ramdisk_id='',reservation_id='r-2n91xcbu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1418827846',owner_user_name='tempest-TestNetworkBasicOps-1418827846-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T22:58:53Z,user_data=None,user_id='786ce878f1d2401ab2375f67e5ebd78b',uuid=a4f50c75-4c0a-4222-a614-20d83eba9a2f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92958b22-0bb7-41c6-9850-61c81cea56d8", "address": "fa:16:3e:5c:2b:96", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92958b22-0b", "ovs_interfaceid": "92958b22-0bb7-41c6-9850-61c81cea56d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.954 189512 DEBUG nova.network.os_vif_util [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Converting VIF {"id": "92958b22-0bb7-41c6-9850-61c81cea56d8", "address": "fa:16:3e:5c:2b:96", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92958b22-0b", "ovs_interfaceid": "92958b22-0bb7-41c6-9850-61c81cea56d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.955 189512 DEBUG nova.network.os_vif_util [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5c:2b:96,bridge_name='br-int',has_traffic_filtering=True,id=92958b22-0bb7-41c6-9850-61c81cea56d8,network=Network(513808ab-c863-4790-88e3-b64040a0ed8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92958b22-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.955 189512 DEBUG os_vif [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:2b:96,bridge_name='br-int',has_traffic_filtering=True,id=92958b22-0bb7-41c6-9850-61c81cea56d8,network=Network(513808ab-c863-4790-88e3-b64040a0ed8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92958b22-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.957 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.957 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92958b22-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.959 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.960 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.963 189512 INFO os_vif [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:2b:96,bridge_name='br-int',has_traffic_filtering=True,id=92958b22-0bb7-41c6-9850-61c81cea56d8,network=Network(513808ab-c863-4790-88e3-b64040a0ed8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92958b22-0b')#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.964 189512 INFO nova.virt.libvirt.driver [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Deleting instance files /var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f_del#033[00m
Dec  1 22:59:35 compute-0 nova_compute[189508]: 2025-12-01 22:59:35.964 189512 INFO nova.virt.libvirt.driver [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Deletion of /var/lib/nova/instances/a4f50c75-4c0a-4222-a614-20d83eba9a2f_del complete#033[00m
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.989 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2081 Content-Type: application/json Date: Mon, 01 Dec 2025 22:59:35 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-19de6e28-5cc8-4887-90e9-64291cce8ca7 x-openstack-request-id: req-19de6e28-5cc8-4887-90e9-64291cce8ca7 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.990 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "d35b993a-ba2a-478d-b7f6-c7dfba36d402", "name": "tempest-TestServerBasicOps-server-158689313", "status": "ACTIVE", "tenant_id": "5d415954cbc84272b9bc26d3d8a3a591", "user_id": "376b22ff1d4b4216a3013dc170064403", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "24b09bdf60478342f25b23a288e4fb1c89f1237d1a3a8d04a7bdd332", "image": {"id": "74bb08bf-1799-4930-aad4-d505f26ff5f4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/74bb08bf-1799-4930-aad4-d505f26ff5f4"}]}, "flavor": {"id": "2e42a55e-71e2-4041-8ca2-725d63f058bf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/2e42a55e-71e2-4041-8ca2-725d63f058bf"}]}, "created": "2025-12-01T22:58:11Z", "updated": "2025-12-01T22:59:33Z", "addresses": {"tempest-TestServerBasicOps-674189106-network": [{"version": 4, "addr": "10.100.0.8", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:bc:78:9d"}, {"version": 4, "addr": "192.168.122.177", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:bc:78:9d"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/d35b993a-ba2a-478d-b7f6-c7dfba36d402"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/d35b993a-ba2a-478d-b7f6-c7dfba36d402"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-553115585", "OS-SRV-USG:launched_at": "2025-12-01T22:58:22.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--132062715"}, {"name": "tempest-secgroup-smoke-1636600951"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000c", 
"OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.991 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/d35b993a-ba2a-478d-b7f6-c7dfba36d402 used request id req-19de6e28-5cc8-4887-90e9-64291cce8ca7 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.993 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd35b993a-ba2a-478d-b7f6-c7dfba36d402', 'name': 'tempest-TestServerBasicOps-server-158689313', 'flavor': {'id': '2e42a55e-71e2-4041-8ca2-725d63f058bf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '74bb08bf-1799-4930-aad4-d505f26ff5f4'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '5d415954cbc84272b9bc26d3d8a3a591', 'user_id': '376b22ff1d4b4216a3013dc170064403', 'hostId': '24b09bdf60478342f25b23a288e4fb1c89f1237d1a3a8d04a7bdd332', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.996 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 6a2b0a2e-1144-4264-917f-086024e18bed from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 22:59:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:35.997 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/6a2b0a2e-1144-4264-917f-086024e18bed -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82f68aee2d35afc7725a847ea4300457258faf9d3b47fbdf3a1dc69f53294b24" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 22:59:36 compute-0 podman[254033]: 2025-12-01 22:59:36.010495385 +0000 UTC m=+0.085815825 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.057 189512 INFO nova.compute.manager [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Took 0.44 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.058 189512 DEBUG oslo.service.loopingcall [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.060 189512 DEBUG nova.compute.manager [-] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.060 189512 DEBUG nova.network.neutron [-] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.070 189512 DEBUG nova.compute.manager [req-13dffb98-066e-4d84-8260-17546ae9fdf1 req-94c6f545-b3f9-4d83-8875-7deef8c273b5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Received event network-vif-unplugged-92958b22-0bb7-41c6-9850-61c81cea56d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.071 189512 DEBUG oslo_concurrency.lockutils [req-13dffb98-066e-4d84-8260-17546ae9fdf1 req-94c6f545-b3f9-4d83-8875-7deef8c273b5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.072 189512 DEBUG oslo_concurrency.lockutils [req-13dffb98-066e-4d84-8260-17546ae9fdf1 req-94c6f545-b3f9-4d83-8875-7deef8c273b5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.073 189512 DEBUG oslo_concurrency.lockutils [req-13dffb98-066e-4d84-8260-17546ae9fdf1 req-94c6f545-b3f9-4d83-8875-7deef8c273b5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.073 189512 DEBUG nova.compute.manager [req-13dffb98-066e-4d84-8260-17546ae9fdf1 req-94c6f545-b3f9-4d83-8875-7deef8c273b5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] No waiting events found dispatching network-vif-unplugged-92958b22-0bb7-41c6-9850-61c81cea56d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.074 189512 DEBUG nova.compute.manager [req-13dffb98-066e-4d84-8260-17546ae9fdf1 req-94c6f545-b3f9-4d83-8875-7deef8c273b5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Received event network-vif-unplugged-92958b22-0bb7-41c6-9850-61c81cea56d8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.259 189512 DEBUG oslo_concurrency.lockutils [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Acquiring lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.260 189512 DEBUG oslo_concurrency.lockutils [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.260 189512 DEBUG oslo_concurrency.lockutils [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Acquiring lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.261 189512 DEBUG oslo_concurrency.lockutils [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.261 189512 DEBUG oslo_concurrency.lockutils [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.262 189512 INFO nova.compute.manager [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Terminating instance#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.264 189512 DEBUG nova.compute.manager [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 22:59:36 compute-0 kernel: tapfdb7b491-6f (unregistering): left promiscuous mode
Dec  1 22:59:36 compute-0 NetworkManager[56278]: <info>  [1764629976.3039] device (tapfdb7b491-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.312 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.315 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:36 compute-0 ovn_controller[97770]: 2025-12-01T22:59:36Z|00146|binding|INFO|Releasing lport fdb7b491-6ff3-42d8-ba52-cdb8d280c17b from this chassis (sb_readonly=0)
Dec  1 22:59:36 compute-0 ovn_controller[97770]: 2025-12-01T22:59:36Z|00147|binding|INFO|Setting lport fdb7b491-6ff3-42d8-ba52-cdb8d280c17b down in Southbound
Dec  1 22:59:36 compute-0 ovn_controller[97770]: 2025-12-01T22:59:36Z|00148|binding|INFO|Removing iface tapfdb7b491-6f ovn-installed in OVS
Dec  1 22:59:36 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:36.321 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:78:9d 10.100.0.8'], port_security=['fa:16:3e:bc:78:9d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd35b993a-ba2a-478d-b7f6-c7dfba36d402', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27ca9db6-6725-47fe-b0f9-957bed1ac95a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d415954cbc84272b9bc26d3d8a3a591', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f0f011a8-001b-403a-aba7-ce71ccfb1571 f3fb426f-e7e3-4d56-8f7b-ee20f8ed572d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5337bcc8-8621-410a-b025-ec1f57d87929, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=fdb7b491-6ff3-42d8-ba52-cdb8d280c17b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:59:36 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:36.322 106662 INFO neutron.agent.ovn.metadata.agent [-] Port fdb7b491-6ff3-42d8-ba52-cdb8d280c17b in datapath 27ca9db6-6725-47fe-b0f9-957bed1ac95a unbound from our chassis#033[00m
Dec  1 22:59:36 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:36.324 106662 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 27ca9db6-6725-47fe-b0f9-957bed1ac95a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 22:59:36 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:36.331 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[91aca0cb-2144-4d37-b6f6-6e3cb0aeefb9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:36 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:36.331 106662 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a namespace which is not needed anymore#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.332 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:36 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Dec  1 22:59:36 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 41.685s CPU time.
Dec  1 22:59:36 compute-0 systemd-machined[155759]: Machine qemu-12-instance-0000000c terminated.
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.490 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.497 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:36 compute-0 neutron-haproxy-ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a[253203]: [NOTICE]   (253207) : haproxy version is 2.8.14-c23fe91
Dec  1 22:59:36 compute-0 neutron-haproxy-ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a[253203]: [NOTICE]   (253207) : path to executable is /usr/sbin/haproxy
Dec  1 22:59:36 compute-0 neutron-haproxy-ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a[253203]: [WARNING]  (253207) : Exiting Master process...
Dec  1 22:59:36 compute-0 neutron-haproxy-ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a[253203]: [ALERT]    (253207) : Current worker (253209) exited with code 143 (Terminated)
Dec  1 22:59:36 compute-0 neutron-haproxy-ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a[253203]: [WARNING]  (253207) : All workers exited. Exiting... (0)
Dec  1 22:59:36 compute-0 systemd[1]: libpod-57a037d09b6f5b1992e26d5b61afee24927b781eb3023ee57bfcf75f1b5ee09c.scope: Deactivated successfully.
Dec  1 22:59:36 compute-0 conmon[253203]: conmon 57a037d09b6f5b1992e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57a037d09b6f5b1992e26d5b61afee24927b781eb3023ee57bfcf75f1b5ee09c.scope/container/memory.events
Dec  1 22:59:36 compute-0 podman[254075]: 2025-12-01 22:59:36.541413199 +0000 UTC m=+0.067651419 container stop 57a037d09b6f5b1992e26d5b61afee24927b781eb3023ee57bfcf75f1b5ee09c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.548 189512 INFO nova.virt.libvirt.driver [-] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Instance destroyed successfully.#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.548 189512 DEBUG nova.objects.instance [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lazy-loading 'resources' on Instance uuid d35b993a-ba2a-478d-b7f6-c7dfba36d402 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.562 189512 DEBUG nova.virt.libvirt.vif [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:58:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-158689313',display_name='tempest-TestServerBasicOps-server-158689313',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-158689313',id=12,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEdLU2XWR0D9/TV5zDcfDyB8kEnTGGiGQva7AuOv6B+LBv56eiAYC8WmrwJdgsugY1wRFkht/o9yr8+gyoh/ocnB+FJdcaoz459gvb4M95yZUZ9pYKJl6veahcNY5ap2bg==',key_name='tempest-TestServerBasicOps-553115585',keypairs=<?>,launch_index=0,launched_at=2025-12-01T22:58:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5d415954cbc84272b9bc26d3d8a3a591',ramdisk_id='',reservation_id='r-ho1w8rch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-708531377',owner_user_name='tempest-TestServerBasicOps-708531377-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T22:59:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='376b22ff1d4b4216a3013dc170064403',uuid=d35b993a-ba2a-478d-b7f6-c7dfba36d402,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "address": 
"fa:16:3e:bc:78:9d", "network": {"id": "27ca9db6-6725-47fe-b0f9-957bed1ac95a", "bridge": "br-int", "label": "tempest-TestServerBasicOps-674189106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d415954cbc84272b9bc26d3d8a3a591", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb7b491-6f", "ovs_interfaceid": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.563 189512 DEBUG nova.network.os_vif_util [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Converting VIF {"id": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "address": "fa:16:3e:bc:78:9d", "network": {"id": "27ca9db6-6725-47fe-b0f9-957bed1ac95a", "bridge": "br-int", "label": "tempest-TestServerBasicOps-674189106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d415954cbc84272b9bc26d3d8a3a591", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb7b491-6f", "ovs_interfaceid": "fdb7b491-6ff3-42d8-ba52-cdb8d280c17b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.564 189512 DEBUG nova.network.os_vif_util [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bc:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=fdb7b491-6ff3-42d8-ba52-cdb8d280c17b,network=Network(27ca9db6-6725-47fe-b0f9-957bed1ac95a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb7b491-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.565 189512 DEBUG os_vif [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=fdb7b491-6ff3-42d8-ba52-cdb8d280c17b,network=Network(27ca9db6-6725-47fe-b0f9-957bed1ac95a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb7b491-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.567 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.567 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdb7b491-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.570 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.572 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:59:36 compute-0 podman[254075]: 2025-12-01 22:59:36.572517081 +0000 UTC m=+0.098755331 container died 57a037d09b6f5b1992e26d5b61afee24927b781eb3023ee57bfcf75f1b5ee09c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.574 189512 INFO os_vif [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=fdb7b491-6ff3-42d8-ba52-cdb8d280c17b,network=Network(27ca9db6-6725-47fe-b0f9-957bed1ac95a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb7b491-6f')#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.576 189512 INFO nova.virt.libvirt.driver [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Deleting instance files /var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402_del#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.577 189512 INFO nova.virt.libvirt.driver [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Deletion of /var/lib/nova/instances/d35b993a-ba2a-478d-b7f6-c7dfba36d402_del complete#033[00m
Dec  1 22:59:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-57a037d09b6f5b1992e26d5b61afee24927b781eb3023ee57bfcf75f1b5ee09c-userdata-shm.mount: Deactivated successfully.
Dec  1 22:59:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdc25ab0297a68887887c490478d614effa6af2a600c832f09635911f4a9a599-merged.mount: Deactivated successfully.
Dec  1 22:59:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:36.627 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1853 Content-Type: application/json Date: Mon, 01 Dec 2025 22:59:36 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-04851f43-8e1d-4eca-a79f-ff6d96b1c6c5 x-openstack-request-id: req-04851f43-8e1d-4eca-a79f-ff6d96b1c6c5 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 22:59:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:36.627 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "6a2b0a2e-1144-4264-917f-086024e18bed", "name": "tempest-TestNetworkBasicOps-server-1960241782", "status": "ACTIVE", "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "user_id": "786ce878f1d2401ab2375f67e5ebd78b", "metadata": {}, "hostId": "f120506d8358cd760ce8cf636bea7a059b83a9a215da0fe7652424d7", "image": {"id": "74bb08bf-1799-4930-aad4-d505f26ff5f4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/74bb08bf-1799-4930-aad4-d505f26ff5f4"}]}, "flavor": {"id": "2e42a55e-71e2-4041-8ca2-725d63f058bf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/2e42a55e-71e2-4041-8ca2-725d63f058bf"}]}, "created": "2025-12-01T22:57:36Z", "updated": "2025-12-01T22:57:55Z", "addresses": {"tempest-network-smoke--740211687": [{"version": 4, "addr": "10.100.0.10", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:67:9d:a6"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/6a2b0a2e-1144-4264-917f-086024e18bed"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/6a2b0a2e-1144-4264-917f-086024e18bed"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-894511931", "OS-SRV-USG:launched_at": "2025-12-01T22:57:55.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1727728887"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} 
_http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 22:59:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:36.627 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/6a2b0a2e-1144-4264-917f-086024e18bed used request id req-04851f43-8e1d-4eca-a79f-ff6d96b1c6c5 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.629 189512 INFO nova.compute.manager [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Took 0.36 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 22:59:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:36.630 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '6a2b0a2e-1144-4264-917f-086024e18bed', 'name': 'tempest-TestNetworkBasicOps-server-1960241782', 'flavor': {'id': '2e42a55e-71e2-4041-8ca2-725d63f058bf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '74bb08bf-1799-4930-aad4-d505f26ff5f4'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '43a7ae6a25114fd199de68dfe3d3217b', 'user_id': '786ce878f1d2401ab2375f67e5ebd78b', 'hostId': 'f120506d8358cd760ce8cf636bea7a059b83a9a215da0fe7652424d7', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.629 189512 DEBUG oslo.service.loopingcall [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.630 189512 DEBUG nova.compute.manager [-] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.631 189512 DEBUG nova.network.neutron [-] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 22:59:36 compute-0 podman[254075]: 2025-12-01 22:59:36.632125621 +0000 UTC m=+0.158363851 container cleanup 57a037d09b6f5b1992e26d5b61afee24927b781eb3023ee57bfcf75f1b5ee09c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  1 22:59:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:36.632 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 4d450663-4303-4535-bc1a-72996000c25a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 22:59:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:36.634 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/4d450663-4303-4535-bc1a-72996000c25a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82f68aee2d35afc7725a847ea4300457258faf9d3b47fbdf3a1dc69f53294b24" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 22:59:36 compute-0 systemd[1]: libpod-conmon-57a037d09b6f5b1992e26d5b61afee24927b781eb3023ee57bfcf75f1b5ee09c.scope: Deactivated successfully.
Dec  1 22:59:36 compute-0 podman[254119]: 2025-12-01 22:59:36.733505086 +0000 UTC m=+0.065824738 container remove 57a037d09b6f5b1992e26d5b61afee24927b781eb3023ee57bfcf75f1b5ee09c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 22:59:36 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:36.749 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[34896111-c179-46cf-806b-995c95532c06]: (4, ('Mon Dec  1 10:59:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a (57a037d09b6f5b1992e26d5b61afee24927b781eb3023ee57bfcf75f1b5ee09c)\n57a037d09b6f5b1992e26d5b61afee24927b781eb3023ee57bfcf75f1b5ee09c\nMon Dec  1 10:59:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a (57a037d09b6f5b1992e26d5b61afee24927b781eb3023ee57bfcf75f1b5ee09c)\n57a037d09b6f5b1992e26d5b61afee24927b781eb3023ee57bfcf75f1b5ee09c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:36 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:36.752 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[e802b415-0e87-4bf6-bf74-a0c3ea6fbc55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:36 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:36.753 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27ca9db6-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.755 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:36 compute-0 kernel: tap27ca9db6-60: left promiscuous mode
Dec  1 22:59:36 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:36.761 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[2262529e-bb28-46d9-a8a9-6c9bbc1d12a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:36 compute-0 nova_compute[189508]: 2025-12-01 22:59:36.772 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:36 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:36.786 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[37f5d50e-8aa7-4fba-8db7-f7e5608f164c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:36 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:36.789 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea07ada-1545-44d3-95a3-4266f4197716]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:36 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:36.811 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[f641bdd6-b81b-40d4-94fc-5e865e082192]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541109, 'reachable_time': 31226, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254132, 'error': None, 'target': 'ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:36 compute-0 systemd[1]: run-netns-ovnmeta\x2d27ca9db6\x2d6725\x2d47fe\x2db0f9\x2d957bed1ac95a.mount: Deactivated successfully.
Dec  1 22:59:36 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:36.819 106770 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-27ca9db6-6725-47fe-b0f9-957bed1ac95a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 22:59:36 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:36.819 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[65ffde98-24ae-4a33-be51-649d4409d55d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.097 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1978 Content-Type: application/json Date: Mon, 01 Dec 2025 22:59:36 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-3dbc0588-e3b5-4cc5-a051-b132f3564aa1 x-openstack-request-id: req-3dbc0588-e3b5-4cc5-a051-b132f3564aa1 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.097 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "4d450663-4303-4535-bc1a-72996000c25a", "name": "tempest-ServerActionsTestJSON-server-2091090341", "status": "ACTIVE", "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "user_id": "f27393706a734cf3bee31de08a363c23", "metadata": {}, "hostId": "be0dce7bad92de6c11f57eea75ca243a77a251fca73d66eb1713e964", "image": {"id": "74bb08bf-1799-4930-aad4-d505f26ff5f4", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/74bb08bf-1799-4930-aad4-d505f26ff5f4"}]}, "flavor": {"id": "2e42a55e-71e2-4041-8ca2-725d63f058bf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/2e42a55e-71e2-4041-8ca2-725d63f058bf"}]}, "created": "2025-12-01T22:57:55Z", "updated": "2025-12-01T22:59:22Z", "addresses": {"tempest-ServerActionsTestJSON-862758432-network": [{"version": 4, "addr": "10.100.0.6", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b8:3e:a0"}, {"version": 4, "addr": "192.168.122.221", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b8:3e:a0"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/4d450663-4303-4535-bc1a-72996000c25a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/4d450663-4303-4535-bc1a-72996000c25a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-87244995", "OS-SRV-USG:launched_at": "2025-12-01T22:58:07.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1136248795"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": 
null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.097 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/4d450663-4303-4535-bc1a-72996000c25a used request id req-3dbc0588-e3b5-4cc5-a051-b132f3564aa1 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.099 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4d450663-4303-4535-bc1a-72996000c25a', 'name': 'tempest-ServerActionsTestJSON-server-2091090341', 'flavor': {'id': '2e42a55e-71e2-4041-8ca2-725d63f058bf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '74bb08bf-1799-4930-aad4-d505f26ff5f4'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'faa4919c58ee4a458bdb25fd4271bfde', 'user_id': 'f27393706a734cf3bee31de08a363c23', 'hostId': 'be0dce7bad92de6c11f57eea75ca243a77a251fca73d66eb1713e964', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'a4f50c75-4c0a-4222-a614-20d83eba9a2f' (instance-0000000d)
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'a4f50c75-4c0a-4222-a614-20d83eba9a2f' (instance-0000000d)
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.101 14 ERROR ceilometer.compute.virt.libvirt.utils [-] Fail to get domain uuid a4f50c75-4c0a-4222-a614-20d83eba9a2f metadata, libvirtError: Domain not found: no domain with matching uuid 'a4f50c75-4c0a-4222-a614-20d83eba9a2f' (instance-0000000d)
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.101 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.101 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.102 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.102 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.103 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T22:59:37.102364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.104 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.108 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 6a2b0a2e-1144-4264-917f-086024e18bed / tap02f1eac6-30 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.108 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/network.outgoing.packets volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.112 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 4d450663-4303-4535-bc1a-72996000c25a / tapa139ed27-b7 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.112 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.113 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.114 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.114 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.114 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.114 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.115 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T22:59:37.114941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.116 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.116 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.117 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.118 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.118 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.119 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T22:59:37.118423) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.119 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.120 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.120 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.121 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.121 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.121 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.122 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.123 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.123 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T22:59:37.122037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.141 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.142 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.159 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.160 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.160 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.161 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.162 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.162 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.162 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T22:59:37.162161) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.163 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.200 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.read.bytes volume: 31058432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.201 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.243 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.244 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.245 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.245 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.245 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.245 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.246 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.246 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.247 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T22:59:37.246655) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.248 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.248 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.read.latency volume: 605630134 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.249 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.read.latency volume: 60447585 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.250 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.read.latency volume: 535662571 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.251 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.read.latency volume: 741321 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.252 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.252 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.252 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.252 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.253 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.253 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.254 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T22:59:37.253464) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.255 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.255 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.256 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.256 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.allocation volume: 30875648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.257 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.258 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.258 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.259 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.259 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.259 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.260 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T22:59:37.259830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.261 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.261 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.read.requests volume: 1118 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.262 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.262 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.263 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.264 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.264 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.264 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.265 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.265 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.266 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T22:59:37.265722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.267 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.267 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.268 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.269 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.269 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.270 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.270 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.271 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.271 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.271 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.272 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T22:59:37.271753) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.273 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.274 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.write.bytes volume: 73097216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.274 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.275 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.275 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.276 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.276 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.276 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.276 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.277 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.277 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.278 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T22:59:37.277714) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.279 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.279 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.write.latency volume: 3729415603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.280 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.280 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.281 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.282 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.282 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.283 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.283 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.283 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.284 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T22:59:37.283734) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.285 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.306 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/cpu volume: 34370000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.330 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/cpu volume: 14470000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.332 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.333 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T22:59:37.333367) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.335 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.335 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.336 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.337 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.338 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T22:59:37.338033) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.339 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.339 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.340 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.341 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.341 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.342 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T22:59:37.342352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.344 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.344 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.344 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.345 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.345 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.346 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.346 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.347 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.347 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T22:59:37.347936) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.348 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.348 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-158689313>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1960241782>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-2091090341>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-158689313>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1960241782>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-2091090341>]
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.349 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.350 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.350 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.350 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.352 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.352 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T22:59:37.350931) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.352 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.353 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.353 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.353 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.355 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T22:59:37.353870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.356 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.356 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/network.incoming.packets volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.357 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.357 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.358 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.358 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.358 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.359 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.360 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T22:59:37.359368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.360 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.361 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.362 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T22:59:37.361649) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.363 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.363 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.364 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.364 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.365 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.365 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.365 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.366 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T22:59:37.366134) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.367 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.367 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.368 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.368 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.369 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.369 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.369 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.370 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T22:59:37.370107) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.371 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.372 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/network.outgoing.bytes volume: 15902 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.372 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.373 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.373 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.373 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.374 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.374 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T22:59:37.374478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.376 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.376 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.376 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.377 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.377 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.378 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.378 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.378 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.379 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T22:59:37.379169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.380 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.380 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/memory.usage volume: 42.31640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.381 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.381 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 4d450663-4303-4535-bc1a-72996000c25a: ceilometer.compute.pollsters.NoVolumeException
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.382 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.382 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.383 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.383 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.384 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T22:59:37.383669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.384 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-158689313>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1960241782>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-2091090341>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-158689313>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1960241782>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-2091090341>]
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.385 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.385 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.385 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.386 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.386 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T22:59:37.386112) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402'
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.387 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-0000000c, id=d35b993a-ba2a-478d-b7f6-c7dfba36d402>: [Error Code 42] Domain not found: no domain with matching uuid 'd35b993a-ba2a-478d-b7f6-c7dfba36d402' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.388 14 DEBUG ceilometer.compute.pollsters [-] 6a2b0a2e-1144-4264-917f-086024e18bed/network.incoming.bytes volume: 20388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.388 14 DEBUG ceilometer.compute.pollsters [-] 4d450663-4303-4535-bc1a-72996000c25a/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.389 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.390 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.390 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.390 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.390 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.390 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 22:59:37.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 22:59:37 compute-0 nova_compute[189508]: 2025-12-01 22:59:37.671 189512 DEBUG nova.network.neutron [-] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:59:37 compute-0 nova_compute[189508]: 2025-12-01 22:59:37.689 189512 DEBUG nova.network.neutron [-] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:59:37 compute-0 nova_compute[189508]: 2025-12-01 22:59:37.690 189512 INFO nova.compute.manager [-] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Took 1.06 seconds to deallocate network for instance.#033[00m
Dec  1 22:59:37 compute-0 nova_compute[189508]: 2025-12-01 22:59:37.718 189512 INFO nova.compute.manager [-] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Took 1.66 seconds to deallocate network for instance.#033[00m
Dec  1 22:59:37 compute-0 nova_compute[189508]: 2025-12-01 22:59:37.740 189512 DEBUG oslo_concurrency.lockutils [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:37 compute-0 nova_compute[189508]: 2025-12-01 22:59:37.740 189512 DEBUG oslo_concurrency.lockutils [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:37 compute-0 nova_compute[189508]: 2025-12-01 22:59:37.760 189512 DEBUG nova.compute.manager [req-bc503d78-c289-4c63-9099-00dc18579ec1 req-5528cf35-1f37-4680-bbb0-c727f2434870 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Received event network-vif-deleted-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:59:37 compute-0 nova_compute[189508]: 2025-12-01 22:59:37.766 189512 DEBUG oslo_concurrency.lockutils [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:37 compute-0 nova_compute[189508]: 2025-12-01 22:59:37.911 189512 DEBUG nova.compute.provider_tree [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:59:37 compute-0 nova_compute[189508]: 2025-12-01 22:59:37.928 189512 DEBUG nova.scheduler.client.report [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:59:37 compute-0 nova_compute[189508]: 2025-12-01 22:59:37.963 189512 DEBUG oslo_concurrency.lockutils [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:37 compute-0 nova_compute[189508]: 2025-12-01 22:59:37.965 189512 DEBUG oslo_concurrency.lockutils [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:37 compute-0 nova_compute[189508]: 2025-12-01 22:59:37.998 189512 INFO nova.scheduler.client.report [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Deleted allocations for instance d35b993a-ba2a-478d-b7f6-c7dfba36d402#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.106 189512 DEBUG oslo_concurrency.lockutils [None req-8b206b94-b07c-44f8-ad44-619740459463 376b22ff1d4b4216a3013dc170064403 5d415954cbc84272b9bc26d3d8a3a591 - - default default] Lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.131 189512 DEBUG nova.compute.provider_tree [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.147 189512 DEBUG nova.scheduler.client.report [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.175 189512 DEBUG oslo_concurrency.lockutils [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.208 189512 DEBUG nova.compute.manager [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Received event network-vif-plugged-92958b22-0bb7-41c6-9850-61c81cea56d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.209 189512 DEBUG oslo_concurrency.lockutils [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.209 189512 DEBUG oslo_concurrency.lockutils [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.209 189512 DEBUG oslo_concurrency.lockutils [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.210 189512 DEBUG nova.compute.manager [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] No waiting events found dispatching network-vif-plugged-92958b22-0bb7-41c6-9850-61c81cea56d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.210 189512 WARNING nova.compute.manager [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Received unexpected event network-vif-plugged-92958b22-0bb7-41c6-9850-61c81cea56d8 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.211 189512 DEBUG nova.compute.manager [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Received event network-vif-unplugged-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.211 189512 DEBUG oslo_concurrency.lockutils [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.212 189512 DEBUG oslo_concurrency.lockutils [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.212 189512 DEBUG oslo_concurrency.lockutils [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.213 189512 DEBUG nova.compute.manager [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] No waiting events found dispatching network-vif-unplugged-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.213 189512 WARNING nova.compute.manager [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Received unexpected event network-vif-unplugged-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b for instance with vm_state deleted and task_state None.#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.214 189512 DEBUG nova.compute.manager [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Received event network-vif-plugged-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.214 189512 DEBUG oslo_concurrency.lockutils [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.214 189512 DEBUG oslo_concurrency.lockutils [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.215 189512 DEBUG oslo_concurrency.lockutils [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "d35b993a-ba2a-478d-b7f6-c7dfba36d402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.215 189512 DEBUG nova.compute.manager [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] No waiting events found dispatching network-vif-plugged-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.216 189512 WARNING nova.compute.manager [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Received unexpected event network-vif-plugged-fdb7b491-6ff3-42d8-ba52-cdb8d280c17b for instance with vm_state deleted and task_state None.#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.216 189512 DEBUG nova.compute.manager [req-b2329a97-6a1d-4548-8345-5e9dffccf317 req-526e94d9-d633-486b-b30a-172d16fa568c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Received event network-vif-deleted-92958b22-0bb7-41c6-9850-61c81cea56d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.218 189512 INFO nova.scheduler.client.report [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Deleted allocations for instance a4f50c75-4c0a-4222-a614-20d83eba9a2f#033[00m
Dec  1 22:59:38 compute-0 nova_compute[189508]: 2025-12-01 22:59:38.315 189512 DEBUG oslo_concurrency.lockutils [None req-6f02af87-f88b-48cf-ba29-11f590731c5a 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "a4f50c75-4c0a-4222-a614-20d83eba9a2f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:40 compute-0 nova_compute[189508]: 2025-12-01 22:59:40.604 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:40 compute-0 nova_compute[189508]: 2025-12-01 22:59:40.820 189512 DEBUG oslo_concurrency.lockutils [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "6a2b0a2e-1144-4264-917f-086024e18bed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:40 compute-0 nova_compute[189508]: 2025-12-01 22:59:40.821 189512 DEBUG oslo_concurrency.lockutils [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "6a2b0a2e-1144-4264-917f-086024e18bed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:40 compute-0 nova_compute[189508]: 2025-12-01 22:59:40.821 189512 DEBUG oslo_concurrency.lockutils [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:40 compute-0 nova_compute[189508]: 2025-12-01 22:59:40.821 189512 DEBUG oslo_concurrency.lockutils [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:40 compute-0 nova_compute[189508]: 2025-12-01 22:59:40.821 189512 DEBUG oslo_concurrency.lockutils [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:40 compute-0 nova_compute[189508]: 2025-12-01 22:59:40.822 189512 INFO nova.compute.manager [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Terminating instance#033[00m
Dec  1 22:59:40 compute-0 nova_compute[189508]: 2025-12-01 22:59:40.823 189512 DEBUG nova.compute.manager [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 22:59:40 compute-0 kernel: tap02f1eac6-30 (unregistering): left promiscuous mode
Dec  1 22:59:40 compute-0 NetworkManager[56278]: <info>  [1764629980.8672] device (tap02f1eac6-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 22:59:40 compute-0 ovn_controller[97770]: 2025-12-01T22:59:40Z|00149|binding|INFO|Releasing lport 02f1eac6-306c-4fa9-82c7-6e9082828c65 from this chassis (sb_readonly=0)
Dec  1 22:59:40 compute-0 ovn_controller[97770]: 2025-12-01T22:59:40Z|00150|binding|INFO|Setting lport 02f1eac6-306c-4fa9-82c7-6e9082828c65 down in Southbound
Dec  1 22:59:40 compute-0 ovn_controller[97770]: 2025-12-01T22:59:40Z|00151|binding|INFO|Removing iface tap02f1eac6-30 ovn-installed in OVS
Dec  1 22:59:40 compute-0 nova_compute[189508]: 2025-12-01 22:59:40.882 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:40.888 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:9d:a6 10.100.0.10'], port_security=['fa:16:3e:67:9d:a6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6a2b0a2e-1144-4264-917f-086024e18bed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-513808ab-c863-4790-88e3-b64040a0ed8a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43a7ae6a25114fd199de68dfe3d3217b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8e736c0-3ac7-45a4-b71c-33bc93594c74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e643dba6-de01-4938-9750-33d8ce8dfa77, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=02f1eac6-306c-4fa9-82c7-6e9082828c65) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 22:59:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:40.891 106662 INFO neutron.agent.ovn.metadata.agent [-] Port 02f1eac6-306c-4fa9-82c7-6e9082828c65 in datapath 513808ab-c863-4790-88e3-b64040a0ed8a unbound from our chassis#033[00m
Dec  1 22:59:40 compute-0 nova_compute[189508]: 2025-12-01 22:59:40.892 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:40.892 106662 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 513808ab-c863-4790-88e3-b64040a0ed8a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 22:59:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:40.894 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[69b824cb-02a0-4ed6-bff1-c70b689cccff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:40.895 106662 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a namespace which is not needed anymore#033[00m
Dec  1 22:59:40 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec  1 22:59:40 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 45.266s CPU time.
Dec  1 22:59:40 compute-0 systemd-machined[155759]: Machine qemu-10-instance-0000000a terminated.
Dec  1 22:59:41 compute-0 neutron-haproxy-ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a[252643]: [NOTICE]   (252647) : haproxy version is 2.8.14-c23fe91
Dec  1 22:59:41 compute-0 neutron-haproxy-ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a[252643]: [NOTICE]   (252647) : path to executable is /usr/sbin/haproxy
Dec  1 22:59:41 compute-0 neutron-haproxy-ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a[252643]: [WARNING]  (252647) : Exiting Master process...
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.048 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:41 compute-0 neutron-haproxy-ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a[252643]: [ALERT]    (252647) : Current worker (252649) exited with code 143 (Terminated)
Dec  1 22:59:41 compute-0 neutron-haproxy-ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a[252643]: [WARNING]  (252647) : All workers exited. Exiting... (0)
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.055 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:41 compute-0 systemd[1]: libpod-38ddc6965d204bf69ec6037f29faba6d00a7d07659e28438a186bd3cbf97e75b.scope: Deactivated successfully.
Dec  1 22:59:41 compute-0 podman[254156]: 2025-12-01 22:59:41.0605746 +0000 UTC m=+0.060322332 container died 38ddc6965d204bf69ec6037f29faba6d00a7d07659e28438a186bd3cbf97e75b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 22:59:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-38ddc6965d204bf69ec6037f29faba6d00a7d07659e28438a186bd3cbf97e75b-userdata-shm.mount: Deactivated successfully.
Dec  1 22:59:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-0aca7c958599cc980fc6c70c600d7cad8601c121aa41fd579c74787664142bab-merged.mount: Deactivated successfully.
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.113 189512 INFO nova.virt.libvirt.driver [-] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Instance destroyed successfully.#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.114 189512 DEBUG nova.objects.instance [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lazy-loading 'resources' on Instance uuid 6a2b0a2e-1144-4264-917f-086024e18bed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 22:59:41 compute-0 podman[254156]: 2025-12-01 22:59:41.124048629 +0000 UTC m=+0.123796361 container cleanup 38ddc6965d204bf69ec6037f29faba6d00a7d07659e28438a186bd3cbf97e75b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 22:59:41 compute-0 systemd[1]: libpod-conmon-38ddc6965d204bf69ec6037f29faba6d00a7d07659e28438a186bd3cbf97e75b.scope: Deactivated successfully.
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.171 189512 DEBUG nova.virt.libvirt.vif [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:57:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960241782',display_name='tempest-TestNetworkBasicOps-server-1960241782',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960241782',id=10,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhExVoUayFMe+jrrrTAUsXIJCndRWHxq1SKk64GclRI1Ri0NLopX756w2GxPIq7V/BCaKXA48bYoWHaVL6kcj1zZ+n+zH01SVT7NBtNAfvGLVXZdp1srCd+VlTCV1sUJw==',key_name='tempest-TestNetworkBasicOps-894511931',keypairs=<?>,launch_index=0,launched_at=2025-12-01T22:57:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='43a7ae6a25114fd199de68dfe3d3217b',ramdisk_id='',reservation_id='r-0jnsvsjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1418827846',owner_user_name='tempest-TestNetworkBasicOps-1418827846-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T22:57:55Z,user_data=None,user_id='786ce878f1d2401ab2375f67e5ebd78b',uuid=6a2b0a2e-1144-4264-917f-086024e18bed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "address": "fa:16:3e:67:9d:a6", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f1eac6-30", "ovs_interfaceid": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.171 189512 DEBUG nova.network.os_vif_util [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Converting VIF {"id": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "address": "fa:16:3e:67:9d:a6", "network": {"id": "513808ab-c863-4790-88e3-b64040a0ed8a", "bridge": "br-int", "label": "tempest-network-smoke--740211687", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43a7ae6a25114fd199de68dfe3d3217b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02f1eac6-30", "ovs_interfaceid": "02f1eac6-306c-4fa9-82c7-6e9082828c65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.172 189512 DEBUG nova.network.os_vif_util [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:9d:a6,bridge_name='br-int',has_traffic_filtering=True,id=02f1eac6-306c-4fa9-82c7-6e9082828c65,network=Network(513808ab-c863-4790-88e3-b64040a0ed8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02f1eac6-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.173 189512 DEBUG os_vif [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:9d:a6,bridge_name='br-int',has_traffic_filtering=True,id=02f1eac6-306c-4fa9-82c7-6e9082828c65,network=Network(513808ab-c863-4790-88e3-b64040a0ed8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02f1eac6-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.174 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.174 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02f1eac6-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.179 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.182 189512 INFO os_vif [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:9d:a6,bridge_name='br-int',has_traffic_filtering=True,id=02f1eac6-306c-4fa9-82c7-6e9082828c65,network=Network(513808ab-c863-4790-88e3-b64040a0ed8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02f1eac6-30')#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.183 189512 INFO nova.virt.libvirt.driver [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Deleting instance files /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed_del#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.183 189512 INFO nova.virt.libvirt.driver [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Deletion of /var/lib/nova/instances/6a2b0a2e-1144-4264-917f-086024e18bed_del complete#033[00m
Dec  1 22:59:41 compute-0 podman[254200]: 2025-12-01 22:59:41.233354389 +0000 UTC m=+0.073820234 container remove 38ddc6965d204bf69ec6037f29faba6d00a7d07659e28438a186bd3cbf97e75b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:59:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:41.242 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[cb46e6ab-7742-4760-a39c-7be960754346]: (4, ('Mon Dec  1 10:59:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a (38ddc6965d204bf69ec6037f29faba6d00a7d07659e28438a186bd3cbf97e75b)\n38ddc6965d204bf69ec6037f29faba6d00a7d07659e28438a186bd3cbf97e75b\nMon Dec  1 10:59:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a (38ddc6965d204bf69ec6037f29faba6d00a7d07659e28438a186bd3cbf97e75b)\n38ddc6965d204bf69ec6037f29faba6d00a7d07659e28438a186bd3cbf97e75b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:41.245 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[14749fb8-c7e6-4ded-8e19-9a328db97bc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:41.246 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap513808ab-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 22:59:41 compute-0 kernel: tap513808ab-c0: left promiscuous mode
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.250 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.259 189512 INFO nova.compute.manager [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Took 0.44 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.260 189512 DEBUG oslo.service.loopingcall [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.260 189512 DEBUG nova.compute.manager [-] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.260 189512 DEBUG nova.network.neutron [-] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 22:59:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:41.265 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[aa41cbaa-f98c-4c2f-b4f7-bb850b367ea3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.266 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:41.283 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[ff645a10-186f-41da-9674-2f8c5f9e22f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:41.285 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[7db8ddbe-052b-41be-9535-1ef6f24092af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:41.304 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[6e21e628-41c3-47a6-8a5a-d862ac83aca5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537633, 'reachable_time': 24296, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254214, 'error': None, 'target': 'ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d513808ab\x2dc863\x2d4790\x2d88e3\x2db64040a0ed8a.mount: Deactivated successfully.
Dec  1 22:59:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:41.309 106770 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-513808ab-c863-4790-88e3-b64040a0ed8a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 22:59:41 compute-0 ovn_metadata_agent[106657]: 2025-12-01 22:59:41.309 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[af9b014d-49e1-4ead-86d1-29c36dd44caf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.524 189512 DEBUG nova.compute.manager [req-0ee7dfd6-39b2-42f8-8df8-08937281c5fe req-a3983eb6-04a5-4917-9443-f8559e0be057 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Received event network-vif-unplugged-02f1eac6-306c-4fa9-82c7-6e9082828c65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.524 189512 DEBUG oslo_concurrency.lockutils [req-0ee7dfd6-39b2-42f8-8df8-08937281c5fe req-a3983eb6-04a5-4917-9443-f8559e0be057 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.525 189512 DEBUG oslo_concurrency.lockutils [req-0ee7dfd6-39b2-42f8-8df8-08937281c5fe req-a3983eb6-04a5-4917-9443-f8559e0be057 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.525 189512 DEBUG oslo_concurrency.lockutils [req-0ee7dfd6-39b2-42f8-8df8-08937281c5fe req-a3983eb6-04a5-4917-9443-f8559e0be057 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.525 189512 DEBUG nova.compute.manager [req-0ee7dfd6-39b2-42f8-8df8-08937281c5fe req-a3983eb6-04a5-4917-9443-f8559e0be057 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] No waiting events found dispatching network-vif-unplugged-02f1eac6-306c-4fa9-82c7-6e9082828c65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.526 189512 DEBUG nova.compute.manager [req-0ee7dfd6-39b2-42f8-8df8-08937281c5fe req-a3983eb6-04a5-4917-9443-f8559e0be057 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Received event network-vif-unplugged-02f1eac6-306c-4fa9-82c7-6e9082828c65 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 22:59:41 compute-0 nova_compute[189508]: 2025-12-01 22:59:41.993 189512 DEBUG nova.network.neutron [-] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 22:59:42 compute-0 nova_compute[189508]: 2025-12-01 22:59:42.011 189512 INFO nova.compute.manager [-] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Took 0.75 seconds to deallocate network for instance.#033[00m
Dec  1 22:59:42 compute-0 nova_compute[189508]: 2025-12-01 22:59:42.057 189512 DEBUG oslo_concurrency.lockutils [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:42 compute-0 nova_compute[189508]: 2025-12-01 22:59:42.057 189512 DEBUG oslo_concurrency.lockutils [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:42 compute-0 nova_compute[189508]: 2025-12-01 22:59:42.094 189512 DEBUG nova.compute.manager [req-15c0d4c6-29f6-4924-8a67-c39e35a237f4 req-7304f3b7-abe7-48e0-821d-7d3a3dcab3b3 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Received event network-vif-deleted-02f1eac6-306c-4fa9-82c7-6e9082828c65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:59:42 compute-0 nova_compute[189508]: 2025-12-01 22:59:42.135 189512 DEBUG nova.compute.provider_tree [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 22:59:42 compute-0 nova_compute[189508]: 2025-12-01 22:59:42.153 189512 DEBUG nova.scheduler.client.report [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 22:59:42 compute-0 nova_compute[189508]: 2025-12-01 22:59:42.174 189512 DEBUG oslo_concurrency.lockutils [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:42 compute-0 nova_compute[189508]: 2025-12-01 22:59:42.198 189512 INFO nova.scheduler.client.report [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Deleted allocations for instance 6a2b0a2e-1144-4264-917f-086024e18bed#033[00m
Dec  1 22:59:42 compute-0 nova_compute[189508]: 2025-12-01 22:59:42.276 189512 DEBUG oslo_concurrency.lockutils [None req-4a9fc475-f438-4bef-b087-84c3ce3de0d7 786ce878f1d2401ab2375f67e5ebd78b 43a7ae6a25114fd199de68dfe3d3217b - - default default] Lock "6a2b0a2e-1144-4264-917f-086024e18bed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:42 compute-0 podman[254216]: 2025-12-01 22:59:42.804991972 +0000 UTC m=+0.078225909 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 22:59:42 compute-0 podman[254215]: 2025-12-01 22:59:42.858716345 +0000 UTC m=+0.139146126 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 22:59:43 compute-0 nova_compute[189508]: 2025-12-01 22:59:43.639 189512 DEBUG nova.compute.manager [req-f42671b8-dfd7-448c-aafb-e03d7cbc6f9d req-0266fd9b-5f22-40b8-8bf0-a1661e02820e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Received event network-vif-plugged-02f1eac6-306c-4fa9-82c7-6e9082828c65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 22:59:43 compute-0 nova_compute[189508]: 2025-12-01 22:59:43.640 189512 DEBUG oslo_concurrency.lockutils [req-f42671b8-dfd7-448c-aafb-e03d7cbc6f9d req-0266fd9b-5f22-40b8-8bf0-a1661e02820e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 22:59:43 compute-0 nova_compute[189508]: 2025-12-01 22:59:43.640 189512 DEBUG oslo_concurrency.lockutils [req-f42671b8-dfd7-448c-aafb-e03d7cbc6f9d req-0266fd9b-5f22-40b8-8bf0-a1661e02820e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 22:59:43 compute-0 nova_compute[189508]: 2025-12-01 22:59:43.641 189512 DEBUG oslo_concurrency.lockutils [req-f42671b8-dfd7-448c-aafb-e03d7cbc6f9d req-0266fd9b-5f22-40b8-8bf0-a1661e02820e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "6a2b0a2e-1144-4264-917f-086024e18bed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 22:59:43 compute-0 nova_compute[189508]: 2025-12-01 22:59:43.641 189512 DEBUG nova.compute.manager [req-f42671b8-dfd7-448c-aafb-e03d7cbc6f9d req-0266fd9b-5f22-40b8-8bf0-a1661e02820e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] No waiting events found dispatching network-vif-plugged-02f1eac6-306c-4fa9-82c7-6e9082828c65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 22:59:43 compute-0 nova_compute[189508]: 2025-12-01 22:59:43.642 189512 WARNING nova.compute.manager [req-f42671b8-dfd7-448c-aafb-e03d7cbc6f9d req-0266fd9b-5f22-40b8-8bf0-a1661e02820e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Received unexpected event network-vif-plugged-02f1eac6-306c-4fa9-82c7-6e9082828c65 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 22:59:43 compute-0 ovn_controller[97770]: 2025-12-01T22:59:43Z|00152|binding|INFO|Releasing lport 59cd1803-8a52-4381-bb39-d2aa1220acc5 from this chassis (sb_readonly=0)
Dec  1 22:59:43 compute-0 nova_compute[189508]: 2025-12-01 22:59:43.904 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:45 compute-0 nova_compute[189508]: 2025-12-01 22:59:45.607 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:45 compute-0 ovn_controller[97770]: 2025-12-01T22:59:45Z|00153|binding|INFO|Releasing lport 59cd1803-8a52-4381-bb39-d2aa1220acc5 from this chassis (sb_readonly=0)
Dec  1 22:59:45 compute-0 nova_compute[189508]: 2025-12-01 22:59:45.872 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:46 compute-0 nova_compute[189508]: 2025-12-01 22:59:46.178 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:46 compute-0 nova_compute[189508]: 2025-12-01 22:59:46.711 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 22:59:47 compute-0 ovn_controller[97770]: 2025-12-01T22:59:47Z|00154|binding|INFO|Releasing lport 59cd1803-8a52-4381-bb39-d2aa1220acc5 from this chassis (sb_readonly=0)
Dec  1 22:59:47 compute-0 nova_compute[189508]: 2025-12-01 22:59:47.436 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 22:59:47 compute-0 podman[254260]: 2025-12-01 22:59:47.806619573 +0000 UTC m=+0.076022017 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The 
Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc.)
Dec  1 22:59:47 compute-0 podman[254264]: 2025-12-01 22:59:47.816980497 +0000 UTC m=+0.082383477 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, version=9.4, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, architecture=x86_64)
Dec  1 22:59:47 compute-0 podman[254259]: 2025-12-01 22:59:47.821012251 +0000 UTC m=+0.086771341 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 22:59:47 compute-0 podman[254258]: 2025-12-01 22:59:47.83226181 +0000 UTC m=+0.107080397 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 22:59:50 compute-0 nova_compute[189508]: 2025-12-01 22:59:50.611 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:59:50 compute-0 nova_compute[189508]: 2025-12-01 22:59:50.916 189512 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764629975.9151216, a4f50c75-4c0a-4222-a614-20d83eba9a2f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 22:59:50 compute-0 nova_compute[189508]: 2025-12-01 22:59:50.917 189512 INFO nova.compute.manager [-] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] VM Stopped (Lifecycle Event)
Dec  1 22:59:50 compute-0 nova_compute[189508]: 2025-12-01 22:59:50.941 189512 DEBUG nova.compute.manager [None req-996158ad-788b-4e69-8c76-cf84fe92540b - - - - - -] [instance: a4f50c75-4c0a-4222-a614-20d83eba9a2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 22:59:51 compute-0 nova_compute[189508]: 2025-12-01 22:59:51.181 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:59:51 compute-0 nova_compute[189508]: 2025-12-01 22:59:51.545 189512 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764629976.5436869, d35b993a-ba2a-478d-b7f6-c7dfba36d402 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 22:59:51 compute-0 nova_compute[189508]: 2025-12-01 22:59:51.546 189512 INFO nova.compute.manager [-] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] VM Stopped (Lifecycle Event)
Dec  1 22:59:51 compute-0 nova_compute[189508]: 2025-12-01 22:59:51.579 189512 DEBUG nova.compute.manager [None req-96787553-5459-4864-b916-0c2847a37ed3 - - - - - -] [instance: d35b993a-ba2a-478d-b7f6-c7dfba36d402] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 22:59:55 compute-0 nova_compute[189508]: 2025-12-01 22:59:55.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 22:59:55 compute-0 nova_compute[189508]: 2025-12-01 22:59:55.605 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:59:55 compute-0 nova_compute[189508]: 2025-12-01 22:59:55.613 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:59:56 compute-0 nova_compute[189508]: 2025-12-01 22:59:56.111 189512 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764629981.1100442, 6a2b0a2e-1144-4264-917f-086024e18bed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 22:59:56 compute-0 nova_compute[189508]: 2025-12-01 22:59:56.112 189512 INFO nova.compute.manager [-] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] VM Stopped (Lifecycle Event)
Dec  1 22:59:56 compute-0 nova_compute[189508]: 2025-12-01 22:59:56.133 189512 DEBUG nova.compute.manager [None req-d2c43bdf-40ae-4ccc-bf29-6e4e7504c3e1 - - - - - -] [instance: 6a2b0a2e-1144-4264-917f-086024e18bed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 22:59:56 compute-0 nova_compute[189508]: 2025-12-01 22:59:56.184 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:59:56 compute-0 ovn_controller[97770]: 2025-12-01T22:59:56Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:3e:a0 10.100.0.6
Dec  1 22:59:57 compute-0 nova_compute[189508]: 2025-12-01 22:59:57.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 22:59:57 compute-0 nova_compute[189508]: 2025-12-01 22:59:57.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 22:59:57 compute-0 nova_compute[189508]: 2025-12-01 22:59:57.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 22:59:57 compute-0 nova_compute[189508]: 2025-12-01 22:59:57.883 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 22:59:58 compute-0 nova_compute[189508]: 2025-12-01 22:59:58.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 22:59:58 compute-0 nova_compute[189508]: 2025-12-01 22:59:58.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 22:59:58 compute-0 nova_compute[189508]: 2025-12-01 22:59:58.546 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-4d450663-4303-4535-bc1a-72996000c25a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 22:59:58 compute-0 nova_compute[189508]: 2025-12-01 22:59:58.547 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-4d450663-4303-4535-bc1a-72996000c25a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 22:59:58 compute-0 nova_compute[189508]: 2025-12-01 22:59:58.547 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 22:59:59 compute-0 podman[203693]: time="2025-12-01T22:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 22:59:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 22:59:59 compute-0 podman[203693]: @ - - [01/Dec/2025:22:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec  1 22:59:59 compute-0 nova_compute[189508]: 2025-12-01 22:59:59.980 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:00:00 compute-0 nova_compute[189508]: 2025-12-01 23:00:00.616 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:00:01 compute-0 nova_compute[189508]: 2025-12-01 23:00:01.188 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:00:01 compute-0 nova_compute[189508]: 2025-12-01 23:00:01.348 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Updating instance_info_cache with network_info: [{"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 23:00:01 compute-0 nova_compute[189508]: 2025-12-01 23:00:01.366 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-4d450663-4303-4535-bc1a-72996000c25a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 23:00:01 compute-0 nova_compute[189508]: 2025-12-01 23:00:01.366 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  1 23:00:01 compute-0 nova_compute[189508]: 2025-12-01 23:00:01.367 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:00:01 compute-0 nova_compute[189508]: 2025-12-01 23:00:01.367 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:00:01 compute-0 nova_compute[189508]: 2025-12-01 23:00:01.368 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 23:00:01 compute-0 nova_compute[189508]: 2025-12-01 23:00:01.368 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:00:01 compute-0 nova_compute[189508]: 2025-12-01 23:00:01.396 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 23:00:01 compute-0 nova_compute[189508]: 2025-12-01 23:00:01.397 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 23:00:01 compute-0 nova_compute[189508]: 2025-12-01 23:00:01.397 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 23:00:01 compute-0 nova_compute[189508]: 2025-12-01 23:00:01.397 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 23:00:01 compute-0 openstack_network_exporter[205887]: ERROR   23:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:00:01 compute-0 openstack_network_exporter[205887]: ERROR   23:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:00:01 compute-0 openstack_network_exporter[205887]: ERROR   23:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:00:01 compute-0 openstack_network_exporter[205887]: ERROR   23:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:00:01 compute-0 openstack_network_exporter[205887]: ERROR   23:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:00:01 compute-0 nova_compute[189508]: 2025-12-01 23:00:01.490 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 23:00:01 compute-0 nova_compute[189508]: 2025-12-01 23:00:01.553 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 23:00:01 compute-0 nova_compute[189508]: 2025-12-01 23:00:01.555 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 23:00:01 compute-0 nova_compute[189508]: 2025-12-01 23:00:01.646 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 23:00:02 compute-0 nova_compute[189508]: 2025-12-01 23:00:02.082 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 23:00:02 compute-0 nova_compute[189508]: 2025-12-01 23:00:02.083 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5185MB free_disk=72.12897872924805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 23:00:02 compute-0 nova_compute[189508]: 2025-12-01 23:00:02.083 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 23:00:02 compute-0 nova_compute[189508]: 2025-12-01 23:00:02.083 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 23:00:02 compute-0 nova_compute[189508]: 2025-12-01 23:00:02.160 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 4d450663-4303-4535-bc1a-72996000c25a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 23:00:02 compute-0 nova_compute[189508]: 2025-12-01 23:00:02.161 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 23:00:02 compute-0 nova_compute[189508]: 2025-12-01 23:00:02.161 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 23:00:02 compute-0 nova_compute[189508]: 2025-12-01 23:00:02.221 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:00:02 compute-0 nova_compute[189508]: 2025-12-01 23:00:02.224 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 23:00:02 compute-0 nova_compute[189508]: 2025-12-01 23:00:02.239 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 23:00:02 compute-0 nova_compute[189508]: 2025-12-01 23:00:02.265 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 23:00:02 compute-0 nova_compute[189508]: 2025-12-01 23:00:02.266 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 23:00:02 compute-0 podman[254354]: 2025-12-01 23:00:02.854382589 +0000 UTC m=+0.115254349 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:00:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:04.642 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 23:00:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:04.642 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 23:00:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:04.643 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 23:00:04 compute-0 nova_compute[189508]: 2025-12-01 23:00:04.852 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:00:05 compute-0 nova_compute[189508]: 2025-12-01 23:00:05.619 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:00:05 compute-0 podman[254375]: 2025-12-01 23:00:05.814130171 +0000 UTC m=+0.095528459 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3)
Dec  1 23:00:06 compute-0 nova_compute[189508]: 2025-12-01 23:00:06.190 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:06 compute-0 podman[254396]: 2025-12-01 23:00:06.858340599 +0000 UTC m=+0.125874030 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4)
Dec  1 23:00:08 compute-0 nova_compute[189508]: 2025-12-01 23:00:08.701 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:09 compute-0 ovn_controller[97770]: 2025-12-01T23:00:09Z|00155|binding|INFO|Releasing lport 59cd1803-8a52-4381-bb39-d2aa1220acc5 from this chassis (sb_readonly=0)
Dec  1 23:00:09 compute-0 nova_compute[189508]: 2025-12-01 23:00:09.286 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:10 compute-0 nova_compute[189508]: 2025-12-01 23:00:10.622 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:11 compute-0 nova_compute[189508]: 2025-12-01 23:00:11.194 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:13 compute-0 podman[254418]: 2025-12-01 23:00:13.035487111 +0000 UTC m=+0.109062814 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 23:00:13 compute-0 podman[254417]: 2025-12-01 23:00:13.086088616 +0000 UTC m=+0.151987251 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 23:00:15 compute-0 nova_compute[189508]: 2025-12-01 23:00:15.625 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:16 compute-0 nova_compute[189508]: 2025-12-01 23:00:16.198 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:16 compute-0 nova_compute[189508]: 2025-12-01 23:00:16.878 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:18 compute-0 podman[254459]: 2025-12-01 23:00:18.832796511 +0000 UTC m=+0.110343790 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 23:00:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:18.833 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 23:00:18 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:18.835 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 23:00:18 compute-0 nova_compute[189508]: 2025-12-01 23:00:18.839 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:18 compute-0 podman[254460]: 2025-12-01 23:00:18.842619779 +0000 UTC m=+0.100357126 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 23:00:18 compute-0 podman[254461]: 2025-12-01 23:00:18.843570656 +0000 UTC m=+0.108942800 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, release=1755695350, architecture=x86_64, distribution-scope=public, container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41)
Dec  1 23:00:18 compute-0 podman[254468]: 2025-12-01 23:00:18.844181704 +0000 UTC m=+0.100492291 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, name=ubi9, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64)
Dec  1 23:00:19 compute-0 nova_compute[189508]: 2025-12-01 23:00:19.457 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "91dfa889-2ab6-4683-bc07-870d2df30bdd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:00:19 compute-0 nova_compute[189508]: 2025-12-01 23:00:19.458 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:00:19 compute-0 nova_compute[189508]: 2025-12-01 23:00:19.475 189512 DEBUG nova.compute.manager [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 23:00:19 compute-0 nova_compute[189508]: 2025-12-01 23:00:19.571 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:00:19 compute-0 nova_compute[189508]: 2025-12-01 23:00:19.572 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:00:19 compute-0 nova_compute[189508]: 2025-12-01 23:00:19.582 189512 DEBUG nova.virt.hardware [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 23:00:19 compute-0 nova_compute[189508]: 2025-12-01 23:00:19.583 189512 INFO nova.compute.claims [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 23:00:19 compute-0 nova_compute[189508]: 2025-12-01 23:00:19.727 189512 DEBUG nova.compute.provider_tree [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:00:19 compute-0 nova_compute[189508]: 2025-12-01 23:00:19.734 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:19 compute-0 nova_compute[189508]: 2025-12-01 23:00:19.749 189512 DEBUG nova.scheduler.client.report [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:00:19 compute-0 nova_compute[189508]: 2025-12-01 23:00:19.775 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:00:19 compute-0 nova_compute[189508]: 2025-12-01 23:00:19.776 189512 DEBUG nova.compute.manager [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 23:00:19 compute-0 nova_compute[189508]: 2025-12-01 23:00:19.824 189512 DEBUG nova.compute.manager [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 23:00:19 compute-0 nova_compute[189508]: 2025-12-01 23:00:19.825 189512 DEBUG nova.network.neutron [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 23:00:19 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:19.837 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:00:19 compute-0 nova_compute[189508]: 2025-12-01 23:00:19.846 189512 INFO nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 23:00:19 compute-0 nova_compute[189508]: 2025-12-01 23:00:19.868 189512 DEBUG nova.compute.manager [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 23:00:20 compute-0 nova_compute[189508]: 2025-12-01 23:00:20.029 189512 DEBUG nova.compute.manager [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 23:00:20 compute-0 nova_compute[189508]: 2025-12-01 23:00:20.031 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 23:00:20 compute-0 nova_compute[189508]: 2025-12-01 23:00:20.031 189512 INFO nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Creating image(s)#033[00m
Dec  1 23:00:20 compute-0 nova_compute[189508]: 2025-12-01 23:00:20.032 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "/var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:00:20 compute-0 nova_compute[189508]: 2025-12-01 23:00:20.033 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "/var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:00:20 compute-0 nova_compute[189508]: 2025-12-01 23:00:20.033 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "/var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:00:20 compute-0 nova_compute[189508]: 2025-12-01 23:00:20.034 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:00:20 compute-0 nova_compute[189508]: 2025-12-01 23:00:20.034 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:00:20 compute-0 nova_compute[189508]: 2025-12-01 23:00:20.129 189512 DEBUG nova.policy [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31117d25a4e94964a6d197de21b13cbe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 23:00:20 compute-0 nova_compute[189508]: 2025-12-01 23:00:20.629 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:20 compute-0 nova_compute[189508]: 2025-12-01 23:00:20.921 189512 DEBUG nova.network.neutron [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Successfully created port: 0eb5530e-04fb-4ba5-821f-1494d355dfa5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.182 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.210 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.280 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa.part --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.281 189512 DEBUG nova.virt.images [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.283 189512 DEBUG nova.privsep.utils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.284 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa.part /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.565 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa.part /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa.converted" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.571 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.620 189512 DEBUG nova.network.neutron [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Successfully updated port: 0eb5530e-04fb-4ba5-821f-1494d355dfa5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.639 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.639 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquired lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.639 189512 DEBUG nova.network.neutron [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.646 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa.converted --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.647 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.669 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.726 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.727 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.728 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.744 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.769 189512 DEBUG nova.compute.manager [req-6f0bc211-d66c-4c05-9081-57684b472cde req-c3d06150-caad-41d1-80a4-901123c2dea1 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Received event network-changed-0eb5530e-04fb-4ba5-821f-1494d355dfa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.769 189512 DEBUG nova.compute.manager [req-6f0bc211-d66c-4c05-9081-57684b472cde req-c3d06150-caad-41d1-80a4-901123c2dea1 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Refreshing instance network info cache due to event network-changed-0eb5530e-04fb-4ba5-821f-1494d355dfa5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.770 189512 DEBUG oslo_concurrency.lockutils [req-6f0bc211-d66c-4c05-9081-57684b472cde req-c3d06150-caad-41d1-80a4-901123c2dea1 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.800 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.801 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa,backing_fmt=raw /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.840 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa,backing_fmt=raw /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.841 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.842 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.882 189512 DEBUG nova.network.neutron [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.903 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.903 189512 DEBUG nova.virt.disk.api [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Checking if we can resize image /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.904 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.968 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.969 189512 DEBUG nova.virt.disk.api [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Cannot resize image /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.970 189512 DEBUG nova.objects.instance [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lazy-loading 'migration_context' on Instance uuid 91dfa889-2ab6-4683-bc07-870d2df30bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.985 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.985 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Ensure instance console log exists: /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.986 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.986 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:00:21 compute-0 nova_compute[189508]: 2025-12-01 23:00:21.986 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.738 189512 DEBUG nova.network.neutron [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updating instance_info_cache with network_info: [{"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.758 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Releasing lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.758 189512 DEBUG nova.compute.manager [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Instance network_info: |[{"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.759 189512 DEBUG oslo_concurrency.lockutils [req-6f0bc211-d66c-4c05-9081-57684b472cde req-c3d06150-caad-41d1-80a4-901123c2dea1 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.759 189512 DEBUG nova.network.neutron [req-6f0bc211-d66c-4c05-9081-57684b472cde req-c3d06150-caad-41d1-80a4-901123c2dea1 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Refreshing network info cache for port 0eb5530e-04fb-4ba5-821f-1494d355dfa5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.762 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Start _get_guest_xml network_info=[{"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T23:00:11Z,direct_url=<?>,disk_format='qcow2',id=ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793,min_disk=0,min_ram=0,name='tempest-scenario-img--67714485',owner='a0bc498794944fb4bfd74d85d99d70b2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T23:00:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': 'ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.784 189512 WARNING nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.794 189512 DEBUG nova.virt.libvirt.host [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.795 189512 DEBUG nova.virt.libvirt.host [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.804 189512 DEBUG nova.virt.libvirt.host [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.804 189512 DEBUG nova.virt.libvirt.host [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.805 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.805 189512 DEBUG nova.virt.hardware [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T22:55:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e42a55e-71e2-4041-8ca2-725d63f058bf',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T23:00:11Z,direct_url=<?>,disk_format='qcow2',id=ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793,min_disk=0,min_ram=0,name='tempest-scenario-img--67714485',owner='a0bc498794944fb4bfd74d85d99d70b2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T23:00:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.806 189512 DEBUG nova.virt.hardware [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.806 189512 DEBUG nova.virt.hardware [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.806 189512 DEBUG nova.virt.hardware [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.806 189512 DEBUG nova.virt.hardware [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.807 189512 DEBUG nova.virt.hardware [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.807 189512 DEBUG nova.virt.hardware [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.807 189512 DEBUG nova.virt.hardware [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.808 189512 DEBUG nova.virt.hardware [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.808 189512 DEBUG nova.virt.hardware [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.808 189512 DEBUG nova.virt.hardware [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.812 189512 DEBUG nova.virt.libvirt.vif [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T23:00:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh',id=14,image_ref='ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='3dac0f46-9f79-460b-b6c5-9876493d569a'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a0bc498794944fb4bfd74d85d99d70b2',ramdisk_id='',reservation_id='r-oyeail70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-2049243380',owner_user_name='tempest-PrometheusGabbiTest-2049243380-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T23:00:19Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='31117d25a4e94964a6d197de21b13cbe',uuid=91dfa889-2ab6-4683-bc07-870d2df30bdd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.812 189512 DEBUG nova.network.os_vif_util [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Converting VIF {"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.813 189512 DEBUG nova.network.os_vif_util [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:86:00,bridge_name='br-int',has_traffic_filtering=True,id=0eb5530e-04fb-4ba5-821f-1494d355dfa5,network=Network(76005ead-26ac-4245-b45f-b052ffa2d506),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eb5530e-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.814 189512 DEBUG nova.objects.instance [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91dfa889-2ab6-4683-bc07-870d2df30bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.828 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] End _get_guest_xml xml=<domain type="kvm">
Dec  1 23:00:22 compute-0 nova_compute[189508]:  <uuid>91dfa889-2ab6-4683-bc07-870d2df30bdd</uuid>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  <name>instance-0000000e</name>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  <memory>131072</memory>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  <vcpu>1</vcpu>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  <metadata>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <nova:name>te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh</nova:name>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <nova:creationTime>2025-12-01 23:00:22</nova:creationTime>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <nova:flavor name="m1.nano">
Dec  1 23:00:22 compute-0 nova_compute[189508]:        <nova:memory>128</nova:memory>
Dec  1 23:00:22 compute-0 nova_compute[189508]:        <nova:disk>1</nova:disk>
Dec  1 23:00:22 compute-0 nova_compute[189508]:        <nova:swap>0</nova:swap>
Dec  1 23:00:22 compute-0 nova_compute[189508]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 23:00:22 compute-0 nova_compute[189508]:        <nova:vcpus>1</nova:vcpus>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      </nova:flavor>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <nova:owner>
Dec  1 23:00:22 compute-0 nova_compute[189508]:        <nova:user uuid="31117d25a4e94964a6d197de21b13cbe">tempest-PrometheusGabbiTest-2049243380-project-member</nova:user>
Dec  1 23:00:22 compute-0 nova_compute[189508]:        <nova:project uuid="a0bc498794944fb4bfd74d85d99d70b2">tempest-PrometheusGabbiTest-2049243380</nova:project>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      </nova:owner>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <nova:root type="image" uuid="ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <nova:ports>
Dec  1 23:00:22 compute-0 nova_compute[189508]:        <nova:port uuid="0eb5530e-04fb-4ba5-821f-1494d355dfa5">
Dec  1 23:00:22 compute-0 nova_compute[189508]:          <nova:ip type="fixed" address="10.100.2.225" ipVersion="4"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:        </nova:port>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      </nova:ports>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    </nova:instance>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  </metadata>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  <sysinfo type="smbios">
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <system>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <entry name="manufacturer">RDO</entry>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <entry name="product">OpenStack Compute</entry>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <entry name="serial">91dfa889-2ab6-4683-bc07-870d2df30bdd</entry>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <entry name="uuid">91dfa889-2ab6-4683-bc07-870d2df30bdd</entry>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <entry name="family">Virtual Machine</entry>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    </system>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  </sysinfo>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  <os>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <boot dev="hd"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <smbios mode="sysinfo"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  </os>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  <features>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <acpi/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <apic/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <vmcoreinfo/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  </features>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  <clock offset="utc">
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <timer name="hpet" present="no"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  </clock>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  <cpu mode="host-model" match="exact">
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  </cpu>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  <devices>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <target dev="vda" bus="virtio"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    </disk>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <disk type="file" device="cdrom">
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.config"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <target dev="sda" bus="sata"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    </disk>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <interface type="ethernet">
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <mac address="fa:16:3e:c3:86:00"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <mtu size="1442"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <target dev="tap0eb5530e-04"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    </interface>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <serial type="pty">
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <log file="/var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/console.log" append="off"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    </serial>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <video>
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    </video>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <input type="tablet" bus="usb"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <rng model="virtio">
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <backend model="random">/dev/urandom</backend>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    </rng>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <controller type="usb" index="0"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    <memballoon model="virtio">
Dec  1 23:00:22 compute-0 nova_compute[189508]:      <stats period="10"/>
Dec  1 23:00:22 compute-0 nova_compute[189508]:    </memballoon>
Dec  1 23:00:22 compute-0 nova_compute[189508]:  </devices>
Dec  1 23:00:22 compute-0 nova_compute[189508]: </domain>
Dec  1 23:00:22 compute-0 nova_compute[189508]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.829 189512 DEBUG nova.compute.manager [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Preparing to wait for external event network-vif-plugged-0eb5530e-04fb-4ba5-821f-1494d355dfa5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.830 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.831 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.831 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.833 189512 DEBUG nova.virt.libvirt.vif [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T23:00:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh',id=14,image_ref='ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='3dac0f46-9f79-460b-b6c5-9876493d569a'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a0bc498794944fb4bfd74d85d99d70b2',ramdisk_id='',reservation_id='r-oyeail70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-2049243380',owner_user_name='tempest-PrometheusGabbiTest-2049243380-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T23:00:19Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='31117d25a4e94964a6d197de21b13cbe',uuid=91dfa889-2ab6-4683-bc07-870d2df30bdd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.833 189512 DEBUG nova.network.os_vif_util [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Converting VIF {"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.835 189512 DEBUG nova.network.os_vif_util [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:86:00,bridge_name='br-int',has_traffic_filtering=True,id=0eb5530e-04fb-4ba5-821f-1494d355dfa5,network=Network(76005ead-26ac-4245-b45f-b052ffa2d506),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eb5530e-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.835 189512 DEBUG os_vif [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:86:00,bridge_name='br-int',has_traffic_filtering=True,id=0eb5530e-04fb-4ba5-821f-1494d355dfa5,network=Network(76005ead-26ac-4245-b45f-b052ffa2d506),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eb5530e-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.836 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.837 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.838 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.842 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.843 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0eb5530e-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.844 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0eb5530e-04, col_values=(('external_ids', {'iface-id': '0eb5530e-04fb-4ba5-821f-1494d355dfa5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:86:00', 'vm-uuid': '91dfa889-2ab6-4683-bc07-870d2df30bdd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.847 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:22 compute-0 NetworkManager[56278]: <info>  [1764630022.8487] manager: (tap0eb5530e-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.851 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.856 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.857 189512 INFO os_vif [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:86:00,bridge_name='br-int',has_traffic_filtering=True,id=0eb5530e-04fb-4ba5-821f-1494d355dfa5,network=Network(76005ead-26ac-4245-b45f-b052ffa2d506),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eb5530e-04')#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.980 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.981 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.982 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] No VIF found with MAC fa:16:3e:c3:86:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 23:00:22 compute-0 nova_compute[189508]: 2025-12-01 23:00:22.983 189512 INFO nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Using config drive#033[00m
Dec  1 23:00:23 compute-0 ovn_controller[97770]: 2025-12-01T23:00:23Z|00156|binding|INFO|Releasing lport 59cd1803-8a52-4381-bb39-d2aa1220acc5 from this chassis (sb_readonly=0)
Dec  1 23:00:23 compute-0 nova_compute[189508]: 2025-12-01 23:00:23.127 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:23 compute-0 nova_compute[189508]: 2025-12-01 23:00:23.392 189512 INFO nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Creating config drive at /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.config#033[00m
Dec  1 23:00:23 compute-0 nova_compute[189508]: 2025-12-01 23:00:23.405 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7iyrxe6e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:00:23 compute-0 nova_compute[189508]: 2025-12-01 23:00:23.531 189512 DEBUG oslo_concurrency.processutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7iyrxe6e" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:00:23 compute-0 kernel: tap0eb5530e-04: entered promiscuous mode
Dec  1 23:00:23 compute-0 NetworkManager[56278]: <info>  [1764630023.6228] manager: (tap0eb5530e-04): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Dec  1 23:00:23 compute-0 nova_compute[189508]: 2025-12-01 23:00:23.625 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:23 compute-0 ovn_controller[97770]: 2025-12-01T23:00:23Z|00157|binding|INFO|Claiming lport 0eb5530e-04fb-4ba5-821f-1494d355dfa5 for this chassis.
Dec  1 23:00:23 compute-0 ovn_controller[97770]: 2025-12-01T23:00:23Z|00158|binding|INFO|0eb5530e-04fb-4ba5-821f-1494d355dfa5: Claiming fa:16:3e:c3:86:00 10.100.2.225
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.635 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:86:00 10.100.2.225'], port_security=['fa:16:3e:c3:86:00 10.100.2.225'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.225/16', 'neutron:device_id': '91dfa889-2ab6-4683-bc07-870d2df30bdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76005ead-26ac-4245-b45f-b052ffa2d506', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1db1c83-5a48-462b-b1b5-4f849ee50fec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39384b3e-eb99-4e89-ab68-0d8f0f8766e1, chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=0eb5530e-04fb-4ba5-821f-1494d355dfa5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.638 106662 INFO neutron.agent.ovn.metadata.agent [-] Port 0eb5530e-04fb-4ba5-821f-1494d355dfa5 in datapath 76005ead-26ac-4245-b45f-b052ffa2d506 bound to our chassis#033[00m
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.641 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76005ead-26ac-4245-b45f-b052ffa2d506#033[00m
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.656 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[05f1b761-e5b5-4818-8ad0-0d5ce94506ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.657 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap76005ead-21 in ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.661 239973 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap76005ead-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.661 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[2be14b77-867a-4c1a-84ba-ad6b8bd92c25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.663 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[6958e3a0-aa8b-4fa3-95bc-91c5c6678aab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:23 compute-0 systemd-udevd[254584]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.681 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[1caece72-bde8-4f09-82e9-be004b656a53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:23 compute-0 nova_compute[189508]: 2025-12-01 23:00:23.690 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:23 compute-0 ovn_controller[97770]: 2025-12-01T23:00:23Z|00159|binding|INFO|Setting lport 0eb5530e-04fb-4ba5-821f-1494d355dfa5 ovn-installed in OVS
Dec  1 23:00:23 compute-0 ovn_controller[97770]: 2025-12-01T23:00:23Z|00160|binding|INFO|Setting lport 0eb5530e-04fb-4ba5-821f-1494d355dfa5 up in Southbound
Dec  1 23:00:23 compute-0 nova_compute[189508]: 2025-12-01 23:00:23.693 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:23 compute-0 systemd-machined[155759]: New machine qemu-15-instance-0000000e.
Dec  1 23:00:23 compute-0 NetworkManager[56278]: <info>  [1764630023.7093] device (tap0eb5530e-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 23:00:23 compute-0 NetworkManager[56278]: <info>  [1764630023.7106] device (tap0eb5530e-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.712 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[93c2e60d-84bf-42e5-9891-ed8bc8190e55]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:23 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.750 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[371a9674-40f0-4951-8c71-2e353c45b2c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.768 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[9b7540d4-2527-4078-9031-3225479c72c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:23 compute-0 NetworkManager[56278]: <info>  [1764630023.7699] manager: (tap76005ead-20): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.803 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[25a4f4dc-adac-4948-a977-df4743f7479c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.806 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[2a946248-b155-45d1-8288-05ea68592be1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:23 compute-0 NetworkManager[56278]: <info>  [1764630023.8280] device (tap76005ead-20): carrier: link connected
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.832 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb1878f-f4e0-4c62-9b27-831d06c85757]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.853 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[de5d51f1-e1b7-45af-9aeb-edf2ebf72889]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76005ead-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:7d:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553339, 'reachable_time': 33436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254621, 'error': None, 'target': 'ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.868 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[ef5954da-056f-4db3-a4f4-397e372831ae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe16:7d22'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 553339, 'tstamp': 553339}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254622, 'error': None, 'target': 'ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.895 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[669968fc-9264-4229-88ae-6ded1b00d46c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76005ead-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:7d:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553339, 'reachable_time': 33436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254623, 'error': None, 'target': 'ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:23 compute-0 nova_compute[189508]: 2025-12-01 23:00:23.927 189512 DEBUG nova.network.neutron [req-6f0bc211-d66c-4c05-9081-57684b472cde req-c3d06150-caad-41d1-80a4-901123c2dea1 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updated VIF entry in instance network info cache for port 0eb5530e-04fb-4ba5-821f-1494d355dfa5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 23:00:23 compute-0 nova_compute[189508]: 2025-12-01 23:00:23.928 189512 DEBUG nova.network.neutron [req-6f0bc211-d66c-4c05-9081-57684b472cde req-c3d06150-caad-41d1-80a4-901123c2dea1 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updating instance_info_cache with network_info: [{"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.933 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[944d0f94-19d4-43ba-9047-2a6509a089ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:23 compute-0 nova_compute[189508]: 2025-12-01 23:00:23.949 189512 DEBUG oslo_concurrency.lockutils [req-6f0bc211-d66c-4c05-9081-57684b472cde req-c3d06150-caad-41d1-80a4-901123c2dea1 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 23:00:23 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.998 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[01d75b9e-7408-41ba-ba9d-1f66de2f8fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:23.999 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76005ead-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:24.000 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:24.000 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76005ead-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:00:24 compute-0 nova_compute[189508]: 2025-12-01 23:00:24.002 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:24 compute-0 kernel: tap76005ead-20: entered promiscuous mode
Dec  1 23:00:24 compute-0 NetworkManager[56278]: <info>  [1764630024.0032] manager: (tap76005ead-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Dec  1 23:00:24 compute-0 nova_compute[189508]: 2025-12-01 23:00:24.006 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:24.006 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76005ead-20, col_values=(('external_ids', {'iface-id': '6cd00ec7-5de6-4094-b01c-8ff2beea0431'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:00:24 compute-0 ovn_controller[97770]: 2025-12-01T23:00:24Z|00161|binding|INFO|Releasing lport 6cd00ec7-5de6-4094-b01c-8ff2beea0431 from this chassis (sb_readonly=0)
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:24.033 106662 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/76005ead-26ac-4245-b45f-b052ffa2d506.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/76005ead-26ac-4245-b45f-b052ffa2d506.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 23:00:24 compute-0 nova_compute[189508]: 2025-12-01 23:00:24.033 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:24.034 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[31beb507-3009-4c86-86e4-227f57773d2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:24.035 106662 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]: global
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    log         /dev/log local0 debug
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    log-tag     haproxy-metadata-proxy-76005ead-26ac-4245-b45f-b052ffa2d506
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    user        root
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    group       root
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    maxconn     1024
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    pidfile     /var/lib/neutron/external/pids/76005ead-26ac-4245-b45f-b052ffa2d506.pid.haproxy
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    daemon
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]: 
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]: defaults
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    log global
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    mode http
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    option httplog
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    option dontlognull
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    option http-server-close
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    option forwardfor
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    retries                 3
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    timeout http-request    30s
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    timeout connect         30s
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    timeout client          32s
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    timeout server          32s
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    timeout http-keep-alive 30s
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]: 
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]: 
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]: listen listener
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    bind 169.254.169.254:80
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]:    http-request add-header X-OVN-Network-ID 76005ead-26ac-4245-b45f-b052ffa2d506
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 23:00:24 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:24.035 106662 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506', 'env', 'PROCESS_TAG=haproxy-76005ead-26ac-4245-b45f-b052ffa2d506', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/76005ead-26ac-4245-b45f-b052ffa2d506.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 23:00:24 compute-0 nova_compute[189508]: 2025-12-01 23:00:24.284 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764630024.283464, 91dfa889-2ab6-4683-bc07-870d2df30bdd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 23:00:24 compute-0 nova_compute[189508]: 2025-12-01 23:00:24.284 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] VM Started (Lifecycle Event)#033[00m
Dec  1 23:00:24 compute-0 nova_compute[189508]: 2025-12-01 23:00:24.306 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 23:00:24 compute-0 nova_compute[189508]: 2025-12-01 23:00:24.313 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764630024.2836025, 91dfa889-2ab6-4683-bc07-870d2df30bdd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 23:00:24 compute-0 nova_compute[189508]: 2025-12-01 23:00:24.314 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] VM Paused (Lifecycle Event)#033[00m
Dec  1 23:00:24 compute-0 nova_compute[189508]: 2025-12-01 23:00:24.332 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 23:00:24 compute-0 nova_compute[189508]: 2025-12-01 23:00:24.339 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 23:00:24 compute-0 nova_compute[189508]: 2025-12-01 23:00:24.361 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 23:00:24 compute-0 podman[254661]: 2025-12-01 23:00:24.533275116 +0000 UTC m=+0.076020557 container create 022589dbf95b724f6d9ad41c3bee0afe9d07772bac003e97f87dec7a2f62283f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 23:00:24 compute-0 systemd[1]: Started libpod-conmon-022589dbf95b724f6d9ad41c3bee0afe9d07772bac003e97f87dec7a2f62283f.scope.
Dec  1 23:00:24 compute-0 podman[254661]: 2025-12-01 23:00:24.497140451 +0000 UTC m=+0.039885952 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 23:00:24 compute-0 systemd[1]: Started libcrun container.
Dec  1 23:00:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bd82104e90355b90eb760d8aceb7adf586baf4e6b9f39a20907ba78525fa25/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 23:00:24 compute-0 podman[254661]: 2025-12-01 23:00:24.666518403 +0000 UTC m=+0.209263904 container init 022589dbf95b724f6d9ad41c3bee0afe9d07772bac003e97f87dec7a2f62283f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:00:24 compute-0 podman[254661]: 2025-12-01 23:00:24.682548698 +0000 UTC m=+0.225294139 container start 022589dbf95b724f6d9ad41c3bee0afe9d07772bac003e97f87dec7a2f62283f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  1 23:00:24 compute-0 neutron-haproxy-ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506[254675]: [NOTICE]   (254679) : New worker (254681) forked
Dec  1 23:00:24 compute-0 neutron-haproxy-ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506[254675]: [NOTICE]   (254679) : Loading success.
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.547 189512 DEBUG nova.compute.manager [req-7bb0acf5-e7b7-4a74-80f7-99ea8c9d0701 req-229e82bf-3efd-4156-8242-7871dd3def02 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Received event network-vif-plugged-0eb5530e-04fb-4ba5-821f-1494d355dfa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.547 189512 DEBUG oslo_concurrency.lockutils [req-7bb0acf5-e7b7-4a74-80f7-99ea8c9d0701 req-229e82bf-3efd-4156-8242-7871dd3def02 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.548 189512 DEBUG oslo_concurrency.lockutils [req-7bb0acf5-e7b7-4a74-80f7-99ea8c9d0701 req-229e82bf-3efd-4156-8242-7871dd3def02 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.549 189512 DEBUG oslo_concurrency.lockutils [req-7bb0acf5-e7b7-4a74-80f7-99ea8c9d0701 req-229e82bf-3efd-4156-8242-7871dd3def02 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.549 189512 DEBUG nova.compute.manager [req-7bb0acf5-e7b7-4a74-80f7-99ea8c9d0701 req-229e82bf-3efd-4156-8242-7871dd3def02 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Processing event network-vif-plugged-0eb5530e-04fb-4ba5-821f-1494d355dfa5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.550 189512 DEBUG nova.compute.manager [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.557 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.559 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764630025.5588412, 91dfa889-2ab6-4683-bc07-870d2df30bdd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.559 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] VM Resumed (Lifecycle Event)#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.568 189512 INFO nova.virt.libvirt.driver [-] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Instance spawned successfully.#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.569 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.589 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.604 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.613 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.614 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.615 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.616 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.617 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.618 189512 DEBUG nova.virt.libvirt.driver [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.626 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.632 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.675 189512 INFO nova.compute.manager [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Took 5.64 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.675 189512 DEBUG nova.compute.manager [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.776 189512 INFO nova.compute.manager [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Took 6.24 seconds to build instance.#033[00m
Dec  1 23:00:25 compute-0 nova_compute[189508]: 2025-12-01 23:00:25.796 189512 DEBUG oslo_concurrency.lockutils [None req-31658704-378d-4d6b-8325-2fb5241e8d85 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.672 189512 DEBUG oslo_concurrency.lockutils [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Acquiring lock "4d450663-4303-4535-bc1a-72996000c25a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.673 189512 DEBUG oslo_concurrency.lockutils [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.674 189512 DEBUG oslo_concurrency.lockutils [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Acquiring lock "4d450663-4303-4535-bc1a-72996000c25a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.675 189512 DEBUG oslo_concurrency.lockutils [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.676 189512 DEBUG oslo_concurrency.lockutils [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.678 189512 INFO nova.compute.manager [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Terminating instance#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.680 189512 DEBUG nova.compute.manager [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 23:00:27 compute-0 kernel: tapa139ed27-b7 (unregistering): left promiscuous mode
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.741 189512 DEBUG nova.compute.manager [req-cc1b916e-988e-436f-9d88-ed461c0c56c5 req-2c6cda53-d1eb-4c1f-9d63-7291c5527422 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Received event network-vif-plugged-0eb5530e-04fb-4ba5-821f-1494d355dfa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.742 189512 DEBUG oslo_concurrency.lockutils [req-cc1b916e-988e-436f-9d88-ed461c0c56c5 req-2c6cda53-d1eb-4c1f-9d63-7291c5527422 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.743 189512 DEBUG oslo_concurrency.lockutils [req-cc1b916e-988e-436f-9d88-ed461c0c56c5 req-2c6cda53-d1eb-4c1f-9d63-7291c5527422 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.743 189512 DEBUG oslo_concurrency.lockutils [req-cc1b916e-988e-436f-9d88-ed461c0c56c5 req-2c6cda53-d1eb-4c1f-9d63-7291c5527422 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.744 189512 DEBUG nova.compute.manager [req-cc1b916e-988e-436f-9d88-ed461c0c56c5 req-2c6cda53-d1eb-4c1f-9d63-7291c5527422 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] No waiting events found dispatching network-vif-plugged-0eb5530e-04fb-4ba5-821f-1494d355dfa5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.745 189512 WARNING nova.compute.manager [req-cc1b916e-988e-436f-9d88-ed461c0c56c5 req-2c6cda53-d1eb-4c1f-9d63-7291c5527422 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Received unexpected event network-vif-plugged-0eb5530e-04fb-4ba5-821f-1494d355dfa5 for instance with vm_state active and task_state None.#033[00m
Dec  1 23:00:27 compute-0 NetworkManager[56278]: <info>  [1764630027.7492] device (tapa139ed27-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 23:00:27 compute-0 ovn_controller[97770]: 2025-12-01T23:00:27Z|00162|binding|INFO|Releasing lport a139ed27-b785-495f-bc93-2f5daea46d42 from this chassis (sb_readonly=0)
Dec  1 23:00:27 compute-0 ovn_controller[97770]: 2025-12-01T23:00:27Z|00163|binding|INFO|Setting lport a139ed27-b785-495f-bc93-2f5daea46d42 down in Southbound
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.787 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:27 compute-0 ovn_controller[97770]: 2025-12-01T23:00:27Z|00164|binding|INFO|Removing iface tapa139ed27-b7 ovn-installed in OVS
Dec  1 23:00:27 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:27.800 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:3e:a0 10.100.0.6'], port_security=['fa:16:3e:b8:3e:a0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4d450663-4303-4535-bc1a-72996000c25a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c3d0516-109b-46fb-ab67-19206f614258', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faa4919c58ee4a458bdb25fd4271bfde', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd06e5c87-dfe8-4629-aafa-87299e309e29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ebd388b8-c29a-49dc-9a3f-96f8cde4cd01, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=a139ed27-b785-495f-bc93-2f5daea46d42) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 23:00:27 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:27.801 106662 INFO neutron.agent.ovn.metadata.agent [-] Port a139ed27-b785-495f-bc93-2f5daea46d42 in datapath 7c3d0516-109b-46fb-ab67-19206f614258 unbound from our chassis#033[00m
Dec  1 23:00:27 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:27.803 106662 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c3d0516-109b-46fb-ab67-19206f614258, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.796 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:27 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:27.811 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[d430ae1b-24d6-465c-bf65-c4a6d5e0e3c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:27 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:27.812 106662 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258 namespace which is not needed anymore#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.819 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:27 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec  1 23:00:27 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000b.scope: Consumed 44.133s CPU time.
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.846 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:27 compute-0 systemd-machined[155759]: Machine qemu-14-instance-0000000b terminated.
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.952 189512 INFO nova.virt.libvirt.driver [-] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Instance destroyed successfully.#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.953 189512 DEBUG nova.objects.instance [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lazy-loading 'resources' on Instance uuid 4d450663-4303-4535-bc1a-72996000c25a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.981 189512 DEBUG nova.virt.libvirt.vif [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T22:57:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2091090341',display_name='tempest-ServerActionsTestJSON-server-2091090341',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2091090341',id=11,image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+fzJbRUs6xTpBTH6qdTI6/Z5W+mGfJgDYfAUhpF05jRUFQOpZmqCMJhmfo4TTDAEYfG1aq/+blNkmuIybaiFy/eDEp+yVFf0iSiXkStUapi+PgaOcCydfsaALgr/g66Q==',key_name='tempest-keypair-87244995',keypairs=<?>,launch_index=0,launched_at=2025-12-01T22:58:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='faa4919c58ee4a458bdb25fd4271bfde',ramdisk_id='',reservation_id='r-lf97gff3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='74bb08bf-1799-4930-aad4-d505f26ff5f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1483688623',owner_user_name='tempest-ServerActionsTestJSON-1483688623-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T22:59:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f27393706a734cf3bee31de08a363c23',uuid=4d450663-4303-4535-bc1a-72996000c25a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.981 189512 DEBUG nova.network.os_vif_util [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Converting VIF {"id": "a139ed27-b785-495f-bc93-2f5daea46d42", "address": "fa:16:3e:b8:3e:a0", "network": {"id": "7c3d0516-109b-46fb-ab67-19206f614258", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-862758432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faa4919c58ee4a458bdb25fd4271bfde", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa139ed27-b7", "ovs_interfaceid": "a139ed27-b785-495f-bc93-2f5daea46d42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.982 189512 DEBUG nova.network.os_vif_util [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:3e:a0,bridge_name='br-int',has_traffic_filtering=True,id=a139ed27-b785-495f-bc93-2f5daea46d42,network=Network(7c3d0516-109b-46fb-ab67-19206f614258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa139ed27-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.983 189512 DEBUG os_vif [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:3e:a0,bridge_name='br-int',has_traffic_filtering=True,id=a139ed27-b785-495f-bc93-2f5daea46d42,network=Network(7c3d0516-109b-46fb-ab67-19206f614258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa139ed27-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.985 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.985 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa139ed27-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.987 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.989 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.991 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.994 189512 INFO os_vif [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:3e:a0,bridge_name='br-int',has_traffic_filtering=True,id=a139ed27-b785-495f-bc93-2f5daea46d42,network=Network(7c3d0516-109b-46fb-ab67-19206f614258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa139ed27-b7')#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.994 189512 INFO nova.virt.libvirt.driver [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Deleting instance files /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a_del#033[00m
Dec  1 23:00:27 compute-0 nova_compute[189508]: 2025-12-01 23:00:27.995 189512 INFO nova.virt.libvirt.driver [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Deletion of /var/lib/nova/instances/4d450663-4303-4535-bc1a-72996000c25a_del complete#033[00m
Dec  1 23:00:28 compute-0 neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258[253921]: [NOTICE]   (253925) : haproxy version is 2.8.14-c23fe91
Dec  1 23:00:28 compute-0 neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258[253921]: [NOTICE]   (253925) : path to executable is /usr/sbin/haproxy
Dec  1 23:00:28 compute-0 neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258[253921]: [WARNING]  (253925) : Exiting Master process...
Dec  1 23:00:28 compute-0 neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258[253921]: [ALERT]    (253925) : Current worker (253927) exited with code 143 (Terminated)
Dec  1 23:00:28 compute-0 neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258[253921]: [WARNING]  (253925) : All workers exited. Exiting... (0)
Dec  1 23:00:28 compute-0 systemd[1]: libpod-7536e6748e22aec87984fc0b6d5d2d869c6fbde789d182d8081aa7dc9f7df2a9.scope: Deactivated successfully.
Dec  1 23:00:28 compute-0 podman[254726]: 2025-12-01 23:00:28.0241543 +0000 UTC m=+0.072681152 container died 7536e6748e22aec87984fc0b6d5d2d869c6fbde789d182d8081aa7dc9f7df2a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 23:00:28 compute-0 nova_compute[189508]: 2025-12-01 23:00:28.053 189512 INFO nova.compute.manager [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Took 0.37 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 23:00:28 compute-0 nova_compute[189508]: 2025-12-01 23:00:28.054 189512 DEBUG oslo.service.loopingcall [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 23:00:28 compute-0 nova_compute[189508]: 2025-12-01 23:00:28.054 189512 DEBUG nova.compute.manager [-] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 23:00:28 compute-0 nova_compute[189508]: 2025-12-01 23:00:28.055 189512 DEBUG nova.network.neutron [-] [instance: 4d450663-4303-4535-bc1a-72996000c25a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 23:00:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7536e6748e22aec87984fc0b6d5d2d869c6fbde789d182d8081aa7dc9f7df2a9-userdata-shm.mount: Deactivated successfully.
Dec  1 23:00:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf4f6bf767e255704fb688284d252be9e7f43de80ef44c52678cab3cf827ed95-merged.mount: Deactivated successfully.
Dec  1 23:00:28 compute-0 podman[254726]: 2025-12-01 23:00:28.08797422 +0000 UTC m=+0.136501052 container cleanup 7536e6748e22aec87984fc0b6d5d2d869c6fbde789d182d8081aa7dc9f7df2a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:00:28 compute-0 systemd[1]: libpod-conmon-7536e6748e22aec87984fc0b6d5d2d869c6fbde789d182d8081aa7dc9f7df2a9.scope: Deactivated successfully.
Dec  1 23:00:28 compute-0 podman[254759]: 2025-12-01 23:00:28.189680603 +0000 UTC m=+0.064821848 container remove 7536e6748e22aec87984fc0b6d5d2d869c6fbde789d182d8081aa7dc9f7df2a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:00:28 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:28.204 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[6c023c95-8dcb-4140-974e-2936ecef832e]: (4, ('Mon Dec  1 11:00:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258 (7536e6748e22aec87984fc0b6d5d2d869c6fbde789d182d8081aa7dc9f7df2a9)\n7536e6748e22aec87984fc0b6d5d2d869c6fbde789d182d8081aa7dc9f7df2a9\nMon Dec  1 11:00:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258 (7536e6748e22aec87984fc0b6d5d2d869c6fbde789d182d8081aa7dc9f7df2a9)\n7536e6748e22aec87984fc0b6d5d2d869c6fbde789d182d8081aa7dc9f7df2a9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:28 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:28.208 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[967d3bc6-eb73-4f92-9976-6ab030a44a52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:28 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:28.209 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c3d0516-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:00:28 compute-0 kernel: tap7c3d0516-10: left promiscuous mode
Dec  1 23:00:28 compute-0 nova_compute[189508]: 2025-12-01 23:00:28.211 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:28 compute-0 nova_compute[189508]: 2025-12-01 23:00:28.214 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:28 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:28.225 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[b9589f4c-a073-4c36-aad8-399f057ec1d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:28 compute-0 nova_compute[189508]: 2025-12-01 23:00:28.244 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:28 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:28.244 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[e960e827-d04a-4db1-918c-318ec289f6d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:28 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:28.246 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[eb846fd9-937c-4a1c-b60f-bc912733e290]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:28 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:28.270 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[69b83e3e-94fb-4d21-bc56-ac7de1b4439e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547127, 'reachable_time': 27301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254774, 'error': None, 'target': 'ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c3d0516\x2d109b\x2d46fb\x2dab67\x2d19206f614258.mount: Deactivated successfully.
Dec  1 23:00:28 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:28.273 106770 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c3d0516-109b-46fb-ab67-19206f614258 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 23:00:28 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:00:28.273 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[32881b95-a3bc-422b-a982-3d81b0b4d65e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:00:28 compute-0 nova_compute[189508]: 2025-12-01 23:00:28.989 189512 DEBUG nova.network.neutron [-] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.011 189512 INFO nova.compute.manager [-] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Took 0.96 seconds to deallocate network for instance.#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.069 189512 DEBUG oslo_concurrency.lockutils [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.069 189512 DEBUG oslo_concurrency.lockutils [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.174 189512 DEBUG nova.compute.provider_tree [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.191 189512 DEBUG nova.scheduler.client.report [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.212 189512 DEBUG oslo_concurrency.lockutils [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.247 189512 INFO nova.scheduler.client.report [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Deleted allocations for instance 4d450663-4303-4535-bc1a-72996000c25a#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.329 189512 DEBUG oslo_concurrency.lockutils [None req-01b17d04-b64d-4289-8f58-d56c4bcbf3ea f27393706a734cf3bee31de08a363c23 faa4919c58ee4a458bdb25fd4271bfde - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:00:29 compute-0 podman[203693]: time="2025-12-01T23:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:00:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:00:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.835 189512 DEBUG nova.compute.manager [req-3784db14-8674-4651-a86f-9e51ae5da4d2 req-ed5c0190-d8fe-4036-8336-0bf26e417d8e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received event network-vif-unplugged-a139ed27-b785-495f-bc93-2f5daea46d42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.836 189512 DEBUG oslo_concurrency.lockutils [req-3784db14-8674-4651-a86f-9e51ae5da4d2 req-ed5c0190-d8fe-4036-8336-0bf26e417d8e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "4d450663-4303-4535-bc1a-72996000c25a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.836 189512 DEBUG oslo_concurrency.lockutils [req-3784db14-8674-4651-a86f-9e51ae5da4d2 req-ed5c0190-d8fe-4036-8336-0bf26e417d8e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.836 189512 DEBUG oslo_concurrency.lockutils [req-3784db14-8674-4651-a86f-9e51ae5da4d2 req-ed5c0190-d8fe-4036-8336-0bf26e417d8e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.837 189512 DEBUG nova.compute.manager [req-3784db14-8674-4651-a86f-9e51ae5da4d2 req-ed5c0190-d8fe-4036-8336-0bf26e417d8e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] No waiting events found dispatching network-vif-unplugged-a139ed27-b785-495f-bc93-2f5daea46d42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.837 189512 WARNING nova.compute.manager [req-3784db14-8674-4651-a86f-9e51ae5da4d2 req-ed5c0190-d8fe-4036-8336-0bf26e417d8e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received unexpected event network-vif-unplugged-a139ed27-b785-495f-bc93-2f5daea46d42 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.837 189512 DEBUG nova.compute.manager [req-3784db14-8674-4651-a86f-9e51ae5da4d2 req-ed5c0190-d8fe-4036-8336-0bf26e417d8e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received event network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.837 189512 DEBUG oslo_concurrency.lockutils [req-3784db14-8674-4651-a86f-9e51ae5da4d2 req-ed5c0190-d8fe-4036-8336-0bf26e417d8e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "4d450663-4303-4535-bc1a-72996000c25a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.838 189512 DEBUG oslo_concurrency.lockutils [req-3784db14-8674-4651-a86f-9e51ae5da4d2 req-ed5c0190-d8fe-4036-8336-0bf26e417d8e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.838 189512 DEBUG oslo_concurrency.lockutils [req-3784db14-8674-4651-a86f-9e51ae5da4d2 req-ed5c0190-d8fe-4036-8336-0bf26e417d8e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "4d450663-4303-4535-bc1a-72996000c25a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.838 189512 DEBUG nova.compute.manager [req-3784db14-8674-4651-a86f-9e51ae5da4d2 req-ed5c0190-d8fe-4036-8336-0bf26e417d8e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] No waiting events found dispatching network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.839 189512 WARNING nova.compute.manager [req-3784db14-8674-4651-a86f-9e51ae5da4d2 req-ed5c0190-d8fe-4036-8336-0bf26e417d8e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received unexpected event network-vif-plugged-a139ed27-b785-495f-bc93-2f5daea46d42 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 23:00:29 compute-0 nova_compute[189508]: 2025-12-01 23:00:29.839 189512 DEBUG nova.compute.manager [req-3784db14-8674-4651-a86f-9e51ae5da4d2 req-ed5c0190-d8fe-4036-8336-0bf26e417d8e c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Received event network-vif-deleted-a139ed27-b785-495f-bc93-2f5daea46d42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 23:00:30 compute-0 nova_compute[189508]: 2025-12-01 23:00:30.633 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:31 compute-0 openstack_network_exporter[205887]: ERROR   23:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:00:31 compute-0 openstack_network_exporter[205887]: ERROR   23:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:00:31 compute-0 openstack_network_exporter[205887]: ERROR   23:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:00:31 compute-0 openstack_network_exporter[205887]: ERROR   23:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:00:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:00:31 compute-0 openstack_network_exporter[205887]: ERROR   23:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:00:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:00:32 compute-0 nova_compute[189508]: 2025-12-01 23:00:32.987 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:33 compute-0 ovn_controller[97770]: 2025-12-01T23:00:33Z|00165|binding|INFO|Releasing lport 6cd00ec7-5de6-4094-b01c-8ff2beea0431 from this chassis (sb_readonly=0)
Dec  1 23:00:33 compute-0 nova_compute[189508]: 2025-12-01 23:00:33.216 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:33 compute-0 ovn_controller[97770]: 2025-12-01T23:00:33Z|00166|binding|INFO|Releasing lport 6cd00ec7-5de6-4094-b01c-8ff2beea0431 from this chassis (sb_readonly=0)
Dec  1 23:00:33 compute-0 nova_compute[189508]: 2025-12-01 23:00:33.396 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:33 compute-0 podman[254776]: 2025-12-01 23:00:33.855558208 +0000 UTC m=+0.118984315 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:00:35 compute-0 nova_compute[189508]: 2025-12-01 23:00:35.635 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:36 compute-0 podman[254800]: 2025-12-01 23:00:36.838647773 +0000 UTC m=+0.121339242 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 23:00:37 compute-0 podman[254820]: 2025-12-01 23:00:37.803245484 +0000 UTC m=+0.079129254 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:00:37 compute-0 nova_compute[189508]: 2025-12-01 23:00:37.990 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:40 compute-0 nova_compute[189508]: 2025-12-01 23:00:40.639 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:42 compute-0 nova_compute[189508]: 2025-12-01 23:00:42.949 189512 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764630027.9477077, 4d450663-4303-4535-bc1a-72996000c25a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 23:00:42 compute-0 nova_compute[189508]: 2025-12-01 23:00:42.950 189512 INFO nova.compute.manager [-] [instance: 4d450663-4303-4535-bc1a-72996000c25a] VM Stopped (Lifecycle Event)#033[00m
Dec  1 23:00:42 compute-0 nova_compute[189508]: 2025-12-01 23:00:42.982 189512 DEBUG nova.compute.manager [None req-ebe202e0-bff7-487e-bd9b-548d2e07078f - - - - - -] [instance: 4d450663-4303-4535-bc1a-72996000c25a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 23:00:42 compute-0 nova_compute[189508]: 2025-12-01 23:00:42.994 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:43 compute-0 podman[254840]: 2025-12-01 23:00:43.783003489 +0000 UTC m=+0.066189427 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  1 23:00:43 compute-0 podman[254839]: 2025-12-01 23:00:43.852662805 +0000 UTC m=+0.128258978 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 23:00:45 compute-0 nova_compute[189508]: 2025-12-01 23:00:45.641 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:48 compute-0 nova_compute[189508]: 2025-12-01 23:00:47.999 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:49 compute-0 nova_compute[189508]: 2025-12-01 23:00:49.263 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:00:49 compute-0 nova_compute[189508]: 2025-12-01 23:00:49.264 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:00:49 compute-0 podman[254885]: 2025-12-01 23:00:49.831696339 +0000 UTC m=+0.108822136 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 23:00:49 compute-0 podman[254888]: 2025-12-01 23:00:49.838093801 +0000 UTC m=+0.109835796 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, release-0.7.12=, name=ubi9, release=1214.1726694543, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 23:00:49 compute-0 podman[254886]: 2025-12-01 23:00:49.857307945 +0000 UTC m=+0.133370672 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 23:00:49 compute-0 podman[254887]: 2025-12-01 23:00:49.864430697 +0000 UTC m=+0.141103512 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, name=ubi9-minimal, maintainer=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm)
Dec  1 23:00:50 compute-0 nova_compute[189508]: 2025-12-01 23:00:50.644 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:53 compute-0 nova_compute[189508]: 2025-12-01 23:00:53.002 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:55 compute-0 nova_compute[189508]: 2025-12-01 23:00:55.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:00:55 compute-0 nova_compute[189508]: 2025-12-01 23:00:55.646 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:57 compute-0 nova_compute[189508]: 2025-12-01 23:00:57.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:00:58 compute-0 nova_compute[189508]: 2025-12-01 23:00:58.004 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:00:58 compute-0 nova_compute[189508]: 2025-12-01 23:00:58.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:00:58 compute-0 nova_compute[189508]: 2025-12-01 23:00:58.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:00:58 compute-0 nova_compute[189508]: 2025-12-01 23:00:58.243 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 23:00:58 compute-0 nova_compute[189508]: 2025-12-01 23:00:58.244 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:00:58 compute-0 nova_compute[189508]: 2025-12-01 23:00:58.244 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:00:58 compute-0 nova_compute[189508]: 2025-12-01 23:00:58.244 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:00:58 compute-0 ovn_controller[97770]: 2025-12-01T23:00:58Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:86:00 10.100.2.225
Dec  1 23:00:58 compute-0 ovn_controller[97770]: 2025-12-01T23:00:58Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:86:00 10.100.2.225
Dec  1 23:00:59 compute-0 nova_compute[189508]: 2025-12-01 23:00:59.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:00:59 compute-0 nova_compute[189508]: 2025-12-01 23:00:59.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:00:59 compute-0 podman[203693]: time="2025-12-01T23:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:00:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:00:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
Dec  1 23:01:00 compute-0 nova_compute[189508]: 2025-12-01 23:01:00.649 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:01:01 compute-0 openstack_network_exporter[205887]: ERROR   23:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:01:01 compute-0 openstack_network_exporter[205887]: ERROR   23:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:01:01 compute-0 openstack_network_exporter[205887]: ERROR   23:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:01:01 compute-0 openstack_network_exporter[205887]: ERROR   23:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:01:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:01:01 compute-0 openstack_network_exporter[205887]: ERROR   23:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:01:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.009 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.235 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.237 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.238 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.239 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.351 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.443 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.444 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.505 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:01:03 compute-0 ovn_controller[97770]: 2025-12-01T23:01:03Z|00167|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.829 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.831 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5171MB free_disk=72.09539413452148GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.832 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.833 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.923 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 91dfa889-2ab6-4683-bc07-870d2df30bdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.924 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.925 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.981 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 23:01:03 compute-0 nova_compute[189508]: 2025-12-01 23:01:03.999 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 23:01:04 compute-0 nova_compute[189508]: 2025-12-01 23:01:04.017 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 23:01:04 compute-0 nova_compute[189508]: 2025-12-01 23:01:04.018 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 23:01:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:01:04.643 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 23:01:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:01:04.643 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 23:01:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:01:04.644 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 23:01:04 compute-0 podman[254991]: 2025-12-01 23:01:04.832480284 +0000 UTC m=+0.096937879 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 23:01:05 compute-0 nova_compute[189508]: 2025-12-01 23:01:05.652 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:07 compute-0 podman[255014]: 2025-12-01 23:01:07.844905551 +0000 UTC m=+0.120629591 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 23:01:07 compute-0 podman[255033]: 2025-12-01 23:01:07.939108362 +0000 UTC m=+0.069304636 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 23:01:08 compute-0 nova_compute[189508]: 2025-12-01 23:01:08.012 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:10 compute-0 nova_compute[189508]: 2025-12-01 23:01:10.656 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:13 compute-0 nova_compute[189508]: 2025-12-01 23:01:13.015 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:14 compute-0 podman[255053]: 2025-12-01 23:01:14.785768628 +0000 UTC m=+0.114501286 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:01:14 compute-0 podman[255054]: 2025-12-01 23:01:14.797868262 +0000 UTC m=+0.125263962 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:01:15 compute-0 nova_compute[189508]: 2025-12-01 23:01:15.659 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:18 compute-0 nova_compute[189508]: 2025-12-01 23:01:18.019 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:20 compute-0 nova_compute[189508]: 2025-12-01 23:01:20.661 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:20 compute-0 podman[255109]: 2025-12-01 23:01:20.843481935 +0000 UTC m=+0.091091494 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, container_name=kepler, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 23:01:20 compute-0 podman[255096]: 2025-12-01 23:01:20.846584753 +0000 UTC m=+0.124493861 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 23:01:20 compute-0 podman[255097]: 2025-12-01 23:01:20.856558786 +0000 UTC m=+0.129872984 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Dec  1 23:01:20 compute-0 podman[255102]: 2025-12-01 23:01:20.864954104 +0000 UTC m=+0.119699595 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc.)
Dec  1 23:01:23 compute-0 nova_compute[189508]: 2025-12-01 23:01:23.023 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:25 compute-0 nova_compute[189508]: 2025-12-01 23:01:25.665 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:28 compute-0 nova_compute[189508]: 2025-12-01 23:01:28.029 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:29 compute-0 podman[203693]: time="2025-12-01T23:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:01:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:01:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Dec  1 23:01:30 compute-0 nova_compute[189508]: 2025-12-01 23:01:30.668 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:31 compute-0 openstack_network_exporter[205887]: ERROR   23:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:01:31 compute-0 openstack_network_exporter[205887]: ERROR   23:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:01:31 compute-0 openstack_network_exporter[205887]: ERROR   23:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:01:31 compute-0 openstack_network_exporter[205887]: ERROR   23:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:01:31 compute-0 openstack_network_exporter[205887]: ERROR   23:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:01:33 compute-0 nova_compute[189508]: 2025-12-01 23:01:33.033 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.276 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.277 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.282 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 91dfa889-2ab6-4683-bc07-870d2df30bdd from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c09e4230>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.284 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/91dfa889-2ab6-4683-bc07-870d2df30bdd -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82f68aee2d35afc7725a847ea4300457258faf9d3b47fbdf3a1dc69f53294b24" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 23:01:35 compute-0 nova_compute[189508]: 2025-12-01 23:01:35.670 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.719 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Mon, 01 Dec 2025 23:01:35 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-6ffaa03e-b984-4ff7-81fd-4f88b3522a02 x-openstack-request-id: req-6ffaa03e-b984-4ff7-81fd-4f88b3522a02 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.719 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "91dfa889-2ab6-4683-bc07-870d2df30bdd", "name": "te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh", "status": "ACTIVE", "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "user_id": "31117d25a4e94964a6d197de21b13cbe", "metadata": {"metering.server_group": "3dac0f46-9f79-460b-b6c5-9876493d569a"}, "hostId": "6371054f80a0ac1fb11dac1293ce9e4cad9937bba665381127450a90", "image": {"id": "ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793"}]}, "flavor": {"id": "2e42a55e-71e2-4041-8ca2-725d63f058bf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/2e42a55e-71e2-4041-8ca2-725d63f058bf"}]}, "created": "2025-12-01T23:00:18Z", "updated": "2025-12-01T23:00:25Z", "addresses": {"": [{"version": 4, "addr": "10.100.2.225", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c3:86:00"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/91dfa889-2ab6-4683-bc07-870d2df30bdd"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/91dfa889-2ab6-4683-bc07-870d2df30bdd"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T23:00:25.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.719 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/91dfa889-2ab6-4683-bc07-870d2df30bdd used request id req-6ffaa03e-b984-4ff7-81fd-4f88b3522a02 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.720 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '91dfa889-2ab6-4683-bc07-870d2df30bdd', 'name': 'te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh', 'flavor': {'id': '2e42a55e-71e2-4041-8ca2-725d63f058bf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'user_id': '31117d25a4e94964a6d197de21b13cbe', 'hostId': '6371054f80a0ac1fb11dac1293ce9e4cad9937bba665381127450a90', 'status': 'active', 'metadata': {'metering.server_group': '3dac0f46-9f79-460b-b6c5-9876493d569a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.721 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.721 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.721 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T23:01:35.721245) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.726 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 91dfa889-2ab6-4683-bc07-870d2df30bdd / tap0eb5530e-04 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.726 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.726 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.726 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.726 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.726 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.727 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.727 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.728 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T23:01:35.727009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.728 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.728 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.728 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T23:01:35.728273) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.728 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.729 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.729 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.729 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.729 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.729 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T23:01:35.729213) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.745 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.745 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.746 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.746 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.746 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.746 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.746 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.746 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.747 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T23:01:35.746834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.786 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.bytes volume: 29568000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.787 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.787 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.788 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.788 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.788 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.788 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.788 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.latency volume: 683363039 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.788 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.latency volume: 52138549 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.788 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T23:01:35.788318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.789 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.789 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.789 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.789 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.789 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.790 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.790 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T23:01:35.789915) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.792 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.792 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.792 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.792 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.792 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.793 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.793 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.requests volume: 1061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.793 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.793 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.794 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.794 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.794 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T23:01:35.793038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.794 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.794 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.794 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.795 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T23:01:35.794686) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.795 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.795 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.795 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.796 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.796 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.796 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.796 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.bytes volume: 72818688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.796 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.796 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.797 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.797 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.797 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.797 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.797 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.latency volume: 3966506162 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.797 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.798 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.798 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.798 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.798 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.798 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T23:01:35.796213) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.798 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.798 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T23:01:35.797387) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.799 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T23:01:35.798686) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.823 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/cpu volume: 68480000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.823 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.823 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.823 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.824 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.824 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.824 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.824 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.824 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.824 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.824 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.825 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.825 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.825 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.825 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T23:01:35.824188) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.825 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.825 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T23:01:35.825155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.825 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.825 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.825 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.825 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.826 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.826 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.826 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T23:01:35.826067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.827 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.requests volume: 318 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.827 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.827 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.827 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.828 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.828 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.828 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.828 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.828 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T23:01:35.828192) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.828 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.828 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh>]
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.828 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.828 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.829 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.829 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T23:01:35.829097) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.829 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.829 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.829 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.830 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T23:01:35.829985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.831 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.831 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.831 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.831 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.831 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.831 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.831 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.832 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.832 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.832 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.832 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.832 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.832 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.832 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.833 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.833 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.833 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.833 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.833 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.833 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.834 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.834 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.834 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.834 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.834 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.834 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.834 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.835 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T23:01:35.831757) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.835 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.835 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T23:01:35.832540) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.835 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T23:01:35.833246) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T23:01:35.834414) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.835 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T23:01:35.835177) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.835 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.835 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.835 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.836 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.836 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/memory.usage volume: 43.69921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.836 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.836 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.836 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.836 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.836 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.837 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.837 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh>]
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.837 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.837 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.837 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.837 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.837 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.837 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.838 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.838 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.839 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.839 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.839 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.839 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.839 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.839 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T23:01:35.835967) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.839 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.840 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.840 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T23:01:35.836925) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.840 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.840 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T23:01:35.837693) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.840 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.840 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.840 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.840 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.840 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.841 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.841 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.841 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.841 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.841 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.841 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.842 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.842 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.842 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.842 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:01:35.842 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:01:35 compute-0 podman[255173]: 2025-12-01 23:01:35.850649022 +0000 UTC m=+0.112765578 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 23:01:38 compute-0 nova_compute[189508]: 2025-12-01 23:01:38.036 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:38 compute-0 podman[255196]: 2025-12-01 23:01:38.84687538 +0000 UTC m=+0.124212123 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:01:38 compute-0 podman[255197]: 2025-12-01 23:01:38.892905525 +0000 UTC m=+0.163479237 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm)
Dec  1 23:01:40 compute-0 nova_compute[189508]: 2025-12-01 23:01:40.672 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:43 compute-0 nova_compute[189508]: 2025-12-01 23:01:43.038 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:45 compute-0 nova_compute[189508]: 2025-12-01 23:01:45.674 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:45 compute-0 podman[255233]: 2025-12-01 23:01:45.829628235 +0000 UTC m=+0.100442249 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 23:01:45 compute-0 podman[255232]: 2025-12-01 23:01:45.856931889 +0000 UTC m=+0.135312858 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 23:01:48 compute-0 nova_compute[189508]: 2025-12-01 23:01:48.015 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:01:48 compute-0 nova_compute[189508]: 2025-12-01 23:01:48.040 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:50 compute-0 nova_compute[189508]: 2025-12-01 23:01:50.676 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:01:51 compute-0 podman[255274]: 2025-12-01 23:01:51.783343811 +0000 UTC m=+0.065749865 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 23:01:51 compute-0 podman[255275]: 2025-12-01 23:01:51.791238385 +0000 UTC m=+0.067901796 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 23:01:51 compute-0 podman[255276]: 2025-12-01 23:01:51.810855642 +0000 UTC m=+0.080467403 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 23:01:51 compute-0 podman[255282]: 2025-12-01 23:01:51.831939799 +0000 UTC m=+0.098890215 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543)
Dec  1 23:01:53 compute-0 nova_compute[189508]: 2025-12-01 23:01:53.044 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:01:55 compute-0 nova_compute[189508]: 2025-12-01 23:01:55.679 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:01:57 compute-0 nova_compute[189508]: 2025-12-01 23:01:57.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:01:58 compute-0 nova_compute[189508]: 2025-12-01 23:01:58.047 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:01:58 compute-0 nova_compute[189508]: 2025-12-01 23:01:58.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:01:58 compute-0 nova_compute[189508]: 2025-12-01 23:01:58.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:01:59 compute-0 nova_compute[189508]: 2025-12-01 23:01:59.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:01:59 compute-0 nova_compute[189508]: 2025-12-01 23:01:59.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:01:59 compute-0 nova_compute[189508]: 2025-12-01 23:01:59.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:01:59 compute-0 nova_compute[189508]: 2025-12-01 23:01:59.409 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 23:01:59 compute-0 nova_compute[189508]: 2025-12-01 23:01:59.410 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 23:01:59 compute-0 nova_compute[189508]: 2025-12-01 23:01:59.411 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 23:01:59 compute-0 nova_compute[189508]: 2025-12-01 23:01:59.411 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid 91dfa889-2ab6-4683-bc07-870d2df30bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 23:01:59 compute-0 podman[203693]: time="2025-12-01T23:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:01:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:01:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Dec  1 23:02:00 compute-0 nova_compute[189508]: 2025-12-01 23:02:00.682 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:01 compute-0 nova_compute[189508]: 2025-12-01 23:02:01.356 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updating instance_info_cache with network_info: [{"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:02:01 compute-0 nova_compute[189508]: 2025-12-01 23:02:01.381 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 23:02:01 compute-0 nova_compute[189508]: 2025-12-01 23:02:01.382 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 23:02:01 compute-0 nova_compute[189508]: 2025-12-01 23:02:01.383 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:02:01 compute-0 nova_compute[189508]: 2025-12-01 23:02:01.384 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:02:01 compute-0 nova_compute[189508]: 2025-12-01 23:02:01.385 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:02:01 compute-0 nova_compute[189508]: 2025-12-01 23:02:01.386 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:02:01 compute-0 openstack_network_exporter[205887]: ERROR   23:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:02:01 compute-0 openstack_network_exporter[205887]: ERROR   23:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:02:01 compute-0 openstack_network_exporter[205887]: ERROR   23:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:02:01 compute-0 openstack_network_exporter[205887]: ERROR   23:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:02:01 compute-0 openstack_network_exporter[205887]: ERROR   23:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.051 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.227 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.228 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.228 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.229 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.319 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.414 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.415 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.476 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.844 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.847 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5125MB free_disk=72.09563827514648GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.848 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.848 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.934 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 91dfa889-2ab6-4683-bc07-870d2df30bdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.934 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.935 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:02:03 compute-0 nova_compute[189508]: 2025-12-01 23:02:03.997 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:02:04 compute-0 nova_compute[189508]: 2025-12-01 23:02:04.014 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:02:04 compute-0 nova_compute[189508]: 2025-12-01 23:02:04.017 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:02:04 compute-0 nova_compute[189508]: 2025-12-01 23:02:04.017 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:02:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:02:04.643 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:02:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:02:04.644 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:02:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:02:04.644 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:02:05 compute-0 nova_compute[189508]: 2025-12-01 23:02:05.683 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:06 compute-0 podman[255366]: 2025-12-01 23:02:06.863032355 +0000 UTC m=+0.132496298 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 23:02:08 compute-0 nova_compute[189508]: 2025-12-01 23:02:08.054 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:09 compute-0 podman[255390]: 2025-12-01 23:02:09.860186439 +0000 UTC m=+0.124202173 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:02:09 compute-0 podman[255391]: 2025-12-01 23:02:09.861112695 +0000 UTC m=+0.117298057 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 23:02:10 compute-0 nova_compute[189508]: 2025-12-01 23:02:10.685 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:13 compute-0 nova_compute[189508]: 2025-12-01 23:02:13.058 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:15 compute-0 nova_compute[189508]: 2025-12-01 23:02:15.687 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:16 compute-0 podman[255431]: 2025-12-01 23:02:16.837058467 +0000 UTC m=+0.092562876 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 23:02:16 compute-0 podman[255430]: 2025-12-01 23:02:16.902717519 +0000 UTC m=+0.164258259 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 23:02:18 compute-0 nova_compute[189508]: 2025-12-01 23:02:18.061 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.201 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.203 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.205 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.206 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.207 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.208 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.230 189512 DEBUG nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.249 189512 DEBUG nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.249 189512 DEBUG nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Image id ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793 yields fingerprint 592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.250 189512 INFO nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] image ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793 at (/var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa): checking#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.250 189512 DEBUG nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] image ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793 at (/var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.253 189512 DEBUG nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.253 189512 DEBUG nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] 91dfa889-2ab6-4683-bc07-870d2df30bdd is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.254 189512 DEBUG nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] 91dfa889-2ab6-4683-bc07-870d2df30bdd has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.254 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.359 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.360 189512 DEBUG nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 91dfa889-2ab6-4683-bc07-870d2df30bdd is backed by 592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.361 189512 WARNING nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Unknown base file: /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.362 189512 WARNING nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Unknown base file: /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.363 189512 WARNING nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Unknown base file: /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.363 189512 INFO nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Active base files: /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.364 189512 INFO nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Removable base files: /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781 /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.365 189512 INFO nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/9c3ca1997acb58c7aa0cee513cca827b62b8612e#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.366 189512 INFO nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/31f03d99bbb3a67ef4cd2051c7debc5a0d1bc781#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.367 189512 INFO nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/c8f11fbe7b2f7582cabaf6cce8cb01ed142ef270#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.367 189512 DEBUG nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.368 189512 DEBUG nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.368 189512 DEBUG nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.369 189512 INFO nova.virt.libvirt.imagecache [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Dec  1 23:02:20 compute-0 nova_compute[189508]: 2025-12-01 23:02:20.702 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:22 compute-0 podman[255476]: 2025-12-01 23:02:22.841883413 +0000 UTC m=+0.101328484 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:02:22 compute-0 podman[255477]: 2025-12-01 23:02:22.842876531 +0000 UTC m=+0.085663630 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, vcs-type=git, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 23:02:22 compute-0 podman[255478]: 2025-12-01 23:02:22.855136198 +0000 UTC m=+0.096117116 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, version=9.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container)
Dec  1 23:02:22 compute-0 podman[255475]: 2025-12-01 23:02:22.857837235 +0000 UTC m=+0.119687065 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 23:02:23 compute-0 nova_compute[189508]: 2025-12-01 23:02:23.063 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:25 compute-0 nova_compute[189508]: 2025-12-01 23:02:25.692 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:27 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 23:02:28 compute-0 nova_compute[189508]: 2025-12-01 23:02:28.067 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:29 compute-0 podman[203693]: time="2025-12-01T23:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:02:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:02:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Dec  1 23:02:30 compute-0 nova_compute[189508]: 2025-12-01 23:02:30.695 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:31 compute-0 openstack_network_exporter[205887]: ERROR   23:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:02:31 compute-0 openstack_network_exporter[205887]: ERROR   23:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:02:31 compute-0 openstack_network_exporter[205887]: ERROR   23:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:02:31 compute-0 openstack_network_exporter[205887]: ERROR   23:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:02:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:02:31 compute-0 openstack_network_exporter[205887]: ERROR   23:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:02:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:02:33 compute-0 nova_compute[189508]: 2025-12-01 23:02:33.312 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:35 compute-0 nova_compute[189508]: 2025-12-01 23:02:35.698 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:37 compute-0 podman[255557]: 2025-12-01 23:02:37.807318866 +0000 UTC m=+0.089311303 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:02:38 compute-0 nova_compute[189508]: 2025-12-01 23:02:38.315 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:40 compute-0 nova_compute[189508]: 2025-12-01 23:02:40.699 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:40 compute-0 podman[255580]: 2025-12-01 23:02:40.802664279 +0000 UTC m=+0.083052856 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 23:02:40 compute-0 podman[255581]: 2025-12-01 23:02:40.825160907 +0000 UTC m=+0.099194624 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 23:02:43 compute-0 nova_compute[189508]: 2025-12-01 23:02:43.319 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:45 compute-0 nova_compute[189508]: 2025-12-01 23:02:45.703 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:47 compute-0 nova_compute[189508]: 2025-12-01 23:02:47.364 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:02:47 compute-0 podman[255616]: 2025-12-01 23:02:47.843710087 +0000 UTC m=+0.111773359 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 23:02:47 compute-0 podman[255615]: 2025-12-01 23:02:47.931451425 +0000 UTC m=+0.201052361 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec  1 23:02:48 compute-0 nova_compute[189508]: 2025-12-01 23:02:48.322 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:50 compute-0 nova_compute[189508]: 2025-12-01 23:02:50.194 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:02:50 compute-0 nova_compute[189508]: 2025-12-01 23:02:50.704 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:53 compute-0 nova_compute[189508]: 2025-12-01 23:02:53.325 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:53 compute-0 podman[255655]: 2025-12-01 23:02:53.77297013 +0000 UTC m=+0.056989267 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 23:02:53 compute-0 podman[255656]: 2025-12-01 23:02:53.806229814 +0000 UTC m=+0.086683029 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 23:02:53 compute-0 podman[255658]: 2025-12-01 23:02:53.80893973 +0000 UTC m=+0.079869785 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4)
Dec  1 23:02:53 compute-0 podman[255657]: 2025-12-01 23:02:53.821935969 +0000 UTC m=+0.097198187 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  1 23:02:55 compute-0 nova_compute[189508]: 2025-12-01 23:02:55.708 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:58 compute-0 nova_compute[189508]: 2025-12-01 23:02:58.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:02:58 compute-0 nova_compute[189508]: 2025-12-01 23:02:58.328 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:02:59 compute-0 nova_compute[189508]: 2025-12-01 23:02:59.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:02:59 compute-0 nova_compute[189508]: 2025-12-01 23:02:59.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:02:59 compute-0 nova_compute[189508]: 2025-12-01 23:02:59.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:02:59 compute-0 nova_compute[189508]: 2025-12-01 23:02:59.581 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 23:02:59 compute-0 nova_compute[189508]: 2025-12-01 23:02:59.582 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 23:02:59 compute-0 nova_compute[189508]: 2025-12-01 23:02:59.582 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 23:02:59 compute-0 nova_compute[189508]: 2025-12-01 23:02:59.583 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid 91dfa889-2ab6-4683-bc07-870d2df30bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 23:02:59 compute-0 podman[203693]: time="2025-12-01T23:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:02:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:02:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Dec  1 23:03:00 compute-0 nova_compute[189508]: 2025-12-01 23:03:00.709 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:00 compute-0 nova_compute[189508]: 2025-12-01 23:03:00.994 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updating instance_info_cache with network_info: [{"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:03:01 compute-0 nova_compute[189508]: 2025-12-01 23:03:01.012 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 23:03:01 compute-0 nova_compute[189508]: 2025-12-01 23:03:01.013 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 23:03:01 compute-0 nova_compute[189508]: 2025-12-01 23:03:01.014 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:03:01 compute-0 nova_compute[189508]: 2025-12-01 23:03:01.015 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:03:01 compute-0 nova_compute[189508]: 2025-12-01 23:03:01.016 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:03:01 compute-0 nova_compute[189508]: 2025-12-01 23:03:01.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:03:01 compute-0 nova_compute[189508]: 2025-12-01 23:03:01.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:03:01 compute-0 openstack_network_exporter[205887]: ERROR   23:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:03:01 compute-0 openstack_network_exporter[205887]: ERROR   23:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:03:01 compute-0 openstack_network_exporter[205887]: ERROR   23:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:03:01 compute-0 openstack_network_exporter[205887]: ERROR   23:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:03:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:03:01 compute-0 openstack_network_exporter[205887]: ERROR   23:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:03:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:03:02 compute-0 nova_compute[189508]: 2025-12-01 23:03:02.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:03:03 compute-0 nova_compute[189508]: 2025-12-01 23:03:03.331 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:04 compute-0 nova_compute[189508]: 2025-12-01 23:03:04.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:03:04 compute-0 nova_compute[189508]: 2025-12-01 23:03:04.235 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:03:04 compute-0 nova_compute[189508]: 2025-12-01 23:03:04.236 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:03:04 compute-0 nova_compute[189508]: 2025-12-01 23:03:04.236 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:03:04 compute-0 nova_compute[189508]: 2025-12-01 23:03:04.236 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:03:04 compute-0 nova_compute[189508]: 2025-12-01 23:03:04.333 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:03:04 compute-0 nova_compute[189508]: 2025-12-01 23:03:04.441 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:03:04 compute-0 nova_compute[189508]: 2025-12-01 23:03:04.442 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:03:04 compute-0 nova_compute[189508]: 2025-12-01 23:03:04.497 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:03:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:03:04.645 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:03:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:03:04.645 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:03:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:03:04.646 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:03:04 compute-0 nova_compute[189508]: 2025-12-01 23:03:04.845 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:03:04 compute-0 nova_compute[189508]: 2025-12-01 23:03:04.848 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5115MB free_disk=72.09563446044922GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:03:04 compute-0 nova_compute[189508]: 2025-12-01 23:03:04.849 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:03:04 compute-0 nova_compute[189508]: 2025-12-01 23:03:04.850 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:03:05 compute-0 nova_compute[189508]: 2025-12-01 23:03:05.087 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 91dfa889-2ab6-4683-bc07-870d2df30bdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:03:05 compute-0 nova_compute[189508]: 2025-12-01 23:03:05.087 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:03:05 compute-0 nova_compute[189508]: 2025-12-01 23:03:05.088 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:03:05 compute-0 nova_compute[189508]: 2025-12-01 23:03:05.156 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing inventories for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 23:03:05 compute-0 nova_compute[189508]: 2025-12-01 23:03:05.230 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating ProviderTree inventory for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 23:03:05 compute-0 nova_compute[189508]: 2025-12-01 23:03:05.231 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating inventory in ProviderTree for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 23:03:05 compute-0 nova_compute[189508]: 2025-12-01 23:03:05.248 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing aggregate associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 23:03:05 compute-0 nova_compute[189508]: 2025-12-01 23:03:05.282 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing trait associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 23:03:05 compute-0 nova_compute[189508]: 2025-12-01 23:03:05.329 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:03:05 compute-0 nova_compute[189508]: 2025-12-01 23:03:05.351 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:03:05 compute-0 nova_compute[189508]: 2025-12-01 23:03:05.354 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:03:05 compute-0 nova_compute[189508]: 2025-12-01 23:03:05.354 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:03:05 compute-0 nova_compute[189508]: 2025-12-01 23:03:05.712 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:08 compute-0 nova_compute[189508]: 2025-12-01 23:03:08.333 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:08 compute-0 podman[255741]: 2025-12-01 23:03:08.796544593 +0000 UTC m=+0.069780510 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 23:03:10 compute-0 nova_compute[189508]: 2025-12-01 23:03:10.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:03:10 compute-0 nova_compute[189508]: 2025-12-01 23:03:10.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 23:03:10 compute-0 nova_compute[189508]: 2025-12-01 23:03:10.216 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 23:03:10 compute-0 nova_compute[189508]: 2025-12-01 23:03:10.718 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:11 compute-0 podman[255764]: 2025-12-01 23:03:11.792975506 +0000 UTC m=+0.071922810 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 23:03:11 compute-0 podman[255765]: 2025-12-01 23:03:11.802585629 +0000 UTC m=+0.072204599 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 23:03:13 compute-0 nova_compute[189508]: 2025-12-01 23:03:13.338 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:15 compute-0 nova_compute[189508]: 2025-12-01 23:03:15.717 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:16 compute-0 nova_compute[189508]: 2025-12-01 23:03:16.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:03:18 compute-0 nova_compute[189508]: 2025-12-01 23:03:18.340 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:18 compute-0 podman[255802]: 2025-12-01 23:03:18.829849926 +0000 UTC m=+0.098480523 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  1 23:03:18 compute-0 podman[255801]: 2025-12-01 23:03:18.839377406 +0000 UTC m=+0.108773765 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Dec  1 23:03:20 compute-0 nova_compute[189508]: 2025-12-01 23:03:20.216 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:03:20 compute-0 nova_compute[189508]: 2025-12-01 23:03:20.217 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 23:03:20 compute-0 nova_compute[189508]: 2025-12-01 23:03:20.719 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:21 compute-0 nova_compute[189508]: 2025-12-01 23:03:21.052 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:03:21 compute-0 nova_compute[189508]: 2025-12-01 23:03:21.086 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Triggering sync for uuid 91dfa889-2ab6-4683-bc07-870d2df30bdd _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 23:03:21 compute-0 nova_compute[189508]: 2025-12-01 23:03:21.087 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "91dfa889-2ab6-4683-bc07-870d2df30bdd" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:03:21 compute-0 nova_compute[189508]: 2025-12-01 23:03:21.088 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:03:21 compute-0 nova_compute[189508]: 2025-12-01 23:03:21.127 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:03:23 compute-0 nova_compute[189508]: 2025-12-01 23:03:23.342 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:24 compute-0 podman[255847]: 2025-12-01 23:03:24.842924536 +0000 UTC m=+0.104404832 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:03:24 compute-0 podman[255848]: 2025-12-01 23:03:24.852694833 +0000 UTC m=+0.114038955 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, release=1755695350, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 23:03:24 compute-0 podman[255849]: 2025-12-01 23:03:24.861837272 +0000 UTC m=+0.115027943 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, config_id=edpm, vcs-type=git, architecture=x86_64, version=9.4, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Dec  1 23:03:24 compute-0 podman[255846]: 2025-12-01 23:03:24.862627584 +0000 UTC m=+0.128412532 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 23:03:25 compute-0 nova_compute[189508]: 2025-12-01 23:03:25.722 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:28 compute-0 nova_compute[189508]: 2025-12-01 23:03:28.344 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:29 compute-0 podman[203693]: time="2025-12-01T23:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:03:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:03:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec  1 23:03:30 compute-0 nova_compute[189508]: 2025-12-01 23:03:30.725 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:31 compute-0 openstack_network_exporter[205887]: ERROR   23:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:03:31 compute-0 openstack_network_exporter[205887]: ERROR   23:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:03:31 compute-0 openstack_network_exporter[205887]: ERROR   23:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:03:31 compute-0 openstack_network_exporter[205887]: ERROR   23:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:03:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:03:31 compute-0 openstack_network_exporter[205887]: ERROR   23:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:03:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:03:33 compute-0 nova_compute[189508]: 2025-12-01 23:03:33.346 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.276 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.276 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.287 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '91dfa889-2ab6-4683-bc07-870d2df30bdd', 'name': 'te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh', 'flavor': {'id': '2e42a55e-71e2-4041-8ca2-725d63f058bf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'user_id': '31117d25a4e94964a6d197de21b13cbe', 'hostId': '6371054f80a0ac1fb11dac1293ce9e4cad9937bba665381127450a90', 'status': 'active', 'metadata': {'metering.server_group': '3dac0f46-9f79-460b-b6c5-9876493d569a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.288 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.288 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.288 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.289 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T23:03:35.289016) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.296 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.298 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.298 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.299 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.299 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T23:03:35.299111) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.300 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.301 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.301 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.302 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.302 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T23:03:35.302207) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.303 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.303 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.303 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T23:03:35.303789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.324 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.324 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.325 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.325 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.325 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.325 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.325 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.326 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T23:03:35.325685) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.375 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.bytes volume: 29568000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.375 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.376 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.376 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.376 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.376 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.latency volume: 683363039 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.376 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.latency volume: 52138549 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T23:03:35.376260) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.377 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.377 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.377 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.377 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.377 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.377 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.377 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.377 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.378 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T23:03:35.377424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.378 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.378 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.378 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.378 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.378 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.378 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.378 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.requests volume: 1061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.379 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.379 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.379 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.379 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.379 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.379 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.380 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.380 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T23:03:35.378627) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.380 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.380 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.380 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.380 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.381 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.381 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.381 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.381 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.381 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.bytes volume: 72867840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.381 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T23:03:35.380035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.381 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.382 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.382 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.382 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.382 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T23:03:35.381443) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.382 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.latency volume: 3988333589 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.383 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.383 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.383 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.383 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.384 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.384 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T23:03:35.382826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.384 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T23:03:35.384187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.403 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/cpu volume: 187770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.403 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.403 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.404 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.404 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.404 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.404 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.404 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.404 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.405 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.405 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.405 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T23:03:35.404147) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.405 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.406 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.406 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.406 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.406 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T23:03:35.405364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.406 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.406 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.requests volume: 326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.406 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.407 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T23:03:35.406504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.407 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.407 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.408 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.408 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.408 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.409 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T23:03:35.407928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T23:03:35.408910) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.409 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.409 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.409 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.409 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.410 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.410 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.410 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.410 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.411 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.411 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.411 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.411 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.411 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.411 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.412 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.412 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.412 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T23:03:35.409782) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.413 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T23:03:35.410703) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.413 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T23:03:35.411754) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.413 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.413 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.414 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.414 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.414 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T23:03:35.412900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.414 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.414 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.414 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.414 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.415 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.415 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/memory.usage volume: 43.69921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.415 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.415 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.415 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.415 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T23:03:35.414006) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.415 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.416 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.416 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.416 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T23:03:35.415037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.416 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.416 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.416 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.417 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T23:03:35.416122) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:03:35.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:03:35 compute-0 nova_compute[189508]: 2025-12-01 23:03:35.727 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:03:38 compute-0 nova_compute[189508]: 2025-12-01 23:03:38.348 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:03:39 compute-0 podman[255924]: 2025-12-01 23:03:39.771229577 +0000 UTC m=+0.058725127 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:03:40 compute-0 nova_compute[189508]: 2025-12-01 23:03:40.731 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:03:42 compute-0 podman[255947]: 2025-12-01 23:03:42.833866877 +0000 UTC m=+0.116102163 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:03:42 compute-0 podman[255948]: 2025-12-01 23:03:42.845225999 +0000 UTC m=+0.116310819 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  1 23:03:43 compute-0 nova_compute[189508]: 2025-12-01 23:03:43.351 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:03:45 compute-0 nova_compute[189508]: 2025-12-01 23:03:45.738 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:03:47 compute-0 nova_compute[189508]: 2025-12-01 23:03:47.230 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:03:48 compute-0 nova_compute[189508]: 2025-12-01 23:03:48.355 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:03:49 compute-0 podman[255986]: 2025-12-01 23:03:49.809150841 +0000 UTC m=+0.085249939 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 23:03:49 compute-0 podman[255985]: 2025-12-01 23:03:49.826787071 +0000 UTC m=+0.104588447 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 23:03:50 compute-0 nova_compute[189508]: 2025-12-01 23:03:50.746 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:53 compute-0 nova_compute[189508]: 2025-12-01 23:03:53.360 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:55 compute-0 nova_compute[189508]: 2025-12-01 23:03:55.751 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:55 compute-0 podman[256030]: 2025-12-01 23:03:55.834488048 +0000 UTC m=+0.101461437 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 23:03:55 compute-0 podman[256031]: 2025-12-01 23:03:55.841904988 +0000 UTC m=+0.098219225 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_id=edpm, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc.)
Dec  1 23:03:55 compute-0 podman[256037]: 2025-12-01 23:03:55.868361179 +0000 UTC m=+0.109639180 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, architecture=x86_64, container_name=kepler, io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, vcs-type=git, vendor=Red Hat, Inc.)
Dec  1 23:03:55 compute-0 podman[256029]: 2025-12-01 23:03:55.870554441 +0000 UTC m=+0.142266134 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 23:03:58 compute-0 nova_compute[189508]: 2025-12-01 23:03:58.363 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:03:59 compute-0 nova_compute[189508]: 2025-12-01 23:03:59.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:03:59 compute-0 nova_compute[189508]: 2025-12-01 23:03:59.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:03:59 compute-0 nova_compute[189508]: 2025-12-01 23:03:59.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:03:59 compute-0 nova_compute[189508]: 2025-12-01 23:03:59.402 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 23:03:59 compute-0 nova_compute[189508]: 2025-12-01 23:03:59.403 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 23:03:59 compute-0 nova_compute[189508]: 2025-12-01 23:03:59.404 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 23:03:59 compute-0 nova_compute[189508]: 2025-12-01 23:03:59.404 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid 91dfa889-2ab6-4683-bc07-870d2df30bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 23:03:59 compute-0 podman[203693]: time="2025-12-01T23:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:03:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:03:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Dec  1 23:04:00 compute-0 nova_compute[189508]: 2025-12-01 23:04:00.752 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:00 compute-0 nova_compute[189508]: 2025-12-01 23:04:00.778 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updating instance_info_cache with network_info: [{"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:04:00 compute-0 nova_compute[189508]: 2025-12-01 23:04:00.792 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 23:04:00 compute-0 nova_compute[189508]: 2025-12-01 23:04:00.793 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 23:04:00 compute-0 nova_compute[189508]: 2025-12-01 23:04:00.794 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:04:00 compute-0 nova_compute[189508]: 2025-12-01 23:04:00.794 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:04:01 compute-0 nova_compute[189508]: 2025-12-01 23:04:01.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:04:01 compute-0 nova_compute[189508]: 2025-12-01 23:04:01.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:04:01 compute-0 nova_compute[189508]: 2025-12-01 23:04:01.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:04:01 compute-0 nova_compute[189508]: 2025-12-01 23:04:01.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:04:01 compute-0 openstack_network_exporter[205887]: ERROR   23:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:04:01 compute-0 openstack_network_exporter[205887]: ERROR   23:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:04:01 compute-0 openstack_network_exporter[205887]: ERROR   23:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:04:01 compute-0 openstack_network_exporter[205887]: ERROR   23:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:04:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:04:01 compute-0 openstack_network_exporter[205887]: ERROR   23:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:04:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:04:03 compute-0 nova_compute[189508]: 2025-12-01 23:04:03.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:04:03 compute-0 nova_compute[189508]: 2025-12-01 23:04:03.364 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:04.648 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:04:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:04.650 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:04:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:04.652 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:04:05 compute-0 nova_compute[189508]: 2025-12-01 23:04:05.754 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:06 compute-0 nova_compute[189508]: 2025-12-01 23:04:06.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:04:06 compute-0 nova_compute[189508]: 2025-12-01 23:04:06.245 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:04:06 compute-0 nova_compute[189508]: 2025-12-01 23:04:06.246 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:04:06 compute-0 nova_compute[189508]: 2025-12-01 23:04:06.247 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:04:06 compute-0 nova_compute[189508]: 2025-12-01 23:04:06.248 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:04:06 compute-0 nova_compute[189508]: 2025-12-01 23:04:06.349 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:04:06 compute-0 nova_compute[189508]: 2025-12-01 23:04:06.433 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:04:06 compute-0 nova_compute[189508]: 2025-12-01 23:04:06.435 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:04:06 compute-0 nova_compute[189508]: 2025-12-01 23:04:06.501 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:04:06 compute-0 nova_compute[189508]: 2025-12-01 23:04:06.950 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:04:06 compute-0 nova_compute[189508]: 2025-12-01 23:04:06.952 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5116MB free_disk=72.09565353393555GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:04:06 compute-0 nova_compute[189508]: 2025-12-01 23:04:06.953 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:04:06 compute-0 nova_compute[189508]: 2025-12-01 23:04:06.953 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:04:07 compute-0 nova_compute[189508]: 2025-12-01 23:04:07.042 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 91dfa889-2ab6-4683-bc07-870d2df30bdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:04:07 compute-0 nova_compute[189508]: 2025-12-01 23:04:07.043 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:04:07 compute-0 nova_compute[189508]: 2025-12-01 23:04:07.043 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:04:07 compute-0 nova_compute[189508]: 2025-12-01 23:04:07.097 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:04:07 compute-0 nova_compute[189508]: 2025-12-01 23:04:07.114 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:04:07 compute-0 nova_compute[189508]: 2025-12-01 23:04:07.116 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:04:07 compute-0 nova_compute[189508]: 2025-12-01 23:04:07.117 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:04:08 compute-0 nova_compute[189508]: 2025-12-01 23:04:08.366 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:10 compute-0 nova_compute[189508]: 2025-12-01 23:04:10.759 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:10 compute-0 podman[256110]: 2025-12-01 23:04:10.840972605 +0000 UTC m=+0.118457510 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:04:13 compute-0 nova_compute[189508]: 2025-12-01 23:04:13.369 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:13 compute-0 podman[256134]: 2025-12-01 23:04:13.809422555 +0000 UTC m=+0.093941184 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:04:13 compute-0 podman[256135]: 2025-12-01 23:04:13.855952765 +0000 UTC m=+0.128220737 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Dec  1 23:04:15 compute-0 nova_compute[189508]: 2025-12-01 23:04:15.759 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:18 compute-0 nova_compute[189508]: 2025-12-01 23:04:18.371 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:20 compute-0 nova_compute[189508]: 2025-12-01 23:04:20.763 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:20 compute-0 podman[256175]: 2025-12-01 23:04:20.813045082 +0000 UTC m=+0.083198969 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 23:04:20 compute-0 podman[256174]: 2025-12-01 23:04:20.849676161 +0000 UTC m=+0.130325525 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  1 23:04:23 compute-0 nova_compute[189508]: 2025-12-01 23:04:23.372 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:25 compute-0 nova_compute[189508]: 2025-12-01 23:04:25.764 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:26 compute-0 podman[256217]: 2025-12-01 23:04:26.800283261 +0000 UTC m=+0.083956262 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:04:26 compute-0 podman[256225]: 2025-12-01 23:04:26.827159793 +0000 UTC m=+0.086522455 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9, version=9.4, vendor=Red Hat, Inc., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  1 23:04:26 compute-0 podman[256218]: 2025-12-01 23:04:26.850859685 +0000 UTC m=+0.119532341 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 23:04:26 compute-0 podman[256219]: 2025-12-01 23:04:26.861776334 +0000 UTC m=+0.124528742 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, name=ubi9-minimal, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 23:04:28 compute-0 nova_compute[189508]: 2025-12-01 23:04:28.375 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:29 compute-0 podman[203693]: time="2025-12-01T23:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:04:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:04:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Dec  1 23:04:30 compute-0 nova_compute[189508]: 2025-12-01 23:04:30.769 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:31 compute-0 openstack_network_exporter[205887]: ERROR   23:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:04:31 compute-0 openstack_network_exporter[205887]: ERROR   23:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:04:31 compute-0 openstack_network_exporter[205887]: ERROR   23:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:04:31 compute-0 openstack_network_exporter[205887]: ERROR   23:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:04:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:04:31 compute-0 openstack_network_exporter[205887]: ERROR   23:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:04:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:04:33 compute-0 nova_compute[189508]: 2025-12-01 23:04:33.380 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:35 compute-0 nova_compute[189508]: 2025-12-01 23:04:35.772 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:38 compute-0 nova_compute[189508]: 2025-12-01 23:04:38.383 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:40 compute-0 nova_compute[189508]: 2025-12-01 23:04:40.773 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:41 compute-0 podman[256294]: 2025-12-01 23:04:41.869758514 +0000 UTC m=+0.134526205 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 23:04:43 compute-0 nova_compute[189508]: 2025-12-01 23:04:43.385 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:44 compute-0 podman[256318]: 2025-12-01 23:04:44.76400643 +0000 UTC m=+0.072392804 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 23:04:44 compute-0 podman[256319]: 2025-12-01 23:04:44.794992419 +0000 UTC m=+0.101791888 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.167 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "42680544-e423-4200-816c-a17b766a4339" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.170 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "42680544-e423-4200-816c-a17b766a4339" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.200 189512 DEBUG nova.compute.manager [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.326 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.327 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.341 189512 DEBUG nova.virt.hardware [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.342 189512 INFO nova.compute.claims [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.491 189512 DEBUG nova.compute.provider_tree [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.512 189512 DEBUG nova.scheduler.client.report [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.540 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.540 189512 DEBUG nova.compute.manager [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.595 189512 DEBUG nova.compute.manager [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.595 189512 DEBUG nova.network.neutron [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.625 189512 INFO nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.647 189512 DEBUG nova.compute.manager [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.742 189512 DEBUG nova.compute.manager [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.744 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.744 189512 INFO nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Creating image(s)#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.745 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "/var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.746 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "/var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.747 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "/var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.765 189512 DEBUG oslo_concurrency.processutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.783 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.841 189512 DEBUG oslo_concurrency.processutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.842 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.843 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.860 189512 DEBUG oslo_concurrency.processutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.913 189512 DEBUG nova.policy [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31117d25a4e94964a6d197de21b13cbe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.929 189512 DEBUG oslo_concurrency.processutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.930 189512 DEBUG oslo_concurrency.processutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa,backing_fmt=raw /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.968 189512 DEBUG oslo_concurrency.processutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa,backing_fmt=raw /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.969 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:04:45 compute-0 nova_compute[189508]: 2025-12-01 23:04:45.970 189512 DEBUG oslo_concurrency.processutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:04:46 compute-0 nova_compute[189508]: 2025-12-01 23:04:46.021 189512 DEBUG oslo_concurrency.processutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/592d9bdb5a34cf6d68cb4b9eebf44466a807a2aa --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:04:46 compute-0 nova_compute[189508]: 2025-12-01 23:04:46.022 189512 DEBUG nova.virt.disk.api [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Checking if we can resize image /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 23:04:46 compute-0 nova_compute[189508]: 2025-12-01 23:04:46.023 189512 DEBUG oslo_concurrency.processutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:04:46 compute-0 nova_compute[189508]: 2025-12-01 23:04:46.077 189512 DEBUG oslo_concurrency.processutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:04:46 compute-0 nova_compute[189508]: 2025-12-01 23:04:46.079 189512 DEBUG nova.virt.disk.api [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Cannot resize image /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 23:04:46 compute-0 nova_compute[189508]: 2025-12-01 23:04:46.081 189512 DEBUG nova.objects.instance [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lazy-loading 'migration_context' on Instance uuid 42680544-e423-4200-816c-a17b766a4339 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 23:04:46 compute-0 nova_compute[189508]: 2025-12-01 23:04:46.101 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 23:04:46 compute-0 nova_compute[189508]: 2025-12-01 23:04:46.101 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Ensure instance console log exists: /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 23:04:46 compute-0 nova_compute[189508]: 2025-12-01 23:04:46.102 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:04:46 compute-0 nova_compute[189508]: 2025-12-01 23:04:46.103 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:04:46 compute-0 nova_compute[189508]: 2025-12-01 23:04:46.103 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:04:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:46.395 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 23:04:46 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:46.399 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 23:04:46 compute-0 nova_compute[189508]: 2025-12-01 23:04:46.405 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:46 compute-0 nova_compute[189508]: 2025-12-01 23:04:46.503 189512 DEBUG nova.network.neutron [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Successfully created port: d040598e-3c6d-4c31-a052-e42d95473b17 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 23:04:47 compute-0 nova_compute[189508]: 2025-12-01 23:04:47.192 189512 DEBUG nova.network.neutron [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Successfully updated port: d040598e-3c6d-4c31-a052-e42d95473b17 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 23:04:47 compute-0 nova_compute[189508]: 2025-12-01 23:04:47.217 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 23:04:47 compute-0 nova_compute[189508]: 2025-12-01 23:04:47.218 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquired lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 23:04:47 compute-0 nova_compute[189508]: 2025-12-01 23:04:47.218 189512 DEBUG nova.network.neutron [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 23:04:47 compute-0 nova_compute[189508]: 2025-12-01 23:04:47.325 189512 DEBUG nova.compute.manager [req-50cedffb-18eb-40cd-b320-60fb7494b3c4 req-2169366c-b87e-4dfe-ad12-b41281350871 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Received event network-changed-d040598e-3c6d-4c31-a052-e42d95473b17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 23:04:47 compute-0 nova_compute[189508]: 2025-12-01 23:04:47.325 189512 DEBUG nova.compute.manager [req-50cedffb-18eb-40cd-b320-60fb7494b3c4 req-2169366c-b87e-4dfe-ad12-b41281350871 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Refreshing instance network info cache due to event network-changed-d040598e-3c6d-4c31-a052-e42d95473b17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 23:04:47 compute-0 nova_compute[189508]: 2025-12-01 23:04:47.326 189512 DEBUG oslo_concurrency.lockutils [req-50cedffb-18eb-40cd-b320-60fb7494b3c4 req-2169366c-b87e-4dfe-ad12-b41281350871 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 23:04:47 compute-0 nova_compute[189508]: 2025-12-01 23:04:47.372 189512 DEBUG nova.network.neutron [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 23:04:48 compute-0 nova_compute[189508]: 2025-12-01 23:04:48.113 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:04:48 compute-0 nova_compute[189508]: 2025-12-01 23:04:48.387 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.801 189512 DEBUG nova.network.neutron [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Updating instance_info_cache with network_info: [{"id": "d040598e-3c6d-4c31-a052-e42d95473b17", "address": "fa:16:3e:90:8f:04", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd040598e-3c", "ovs_interfaceid": "d040598e-3c6d-4c31-a052-e42d95473b17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.831 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Releasing lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.832 189512 DEBUG nova.compute.manager [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Instance network_info: |[{"id": "d040598e-3c6d-4c31-a052-e42d95473b17", "address": "fa:16:3e:90:8f:04", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd040598e-3c", "ovs_interfaceid": "d040598e-3c6d-4c31-a052-e42d95473b17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.833 189512 DEBUG oslo_concurrency.lockutils [req-50cedffb-18eb-40cd-b320-60fb7494b3c4 req-2169366c-b87e-4dfe-ad12-b41281350871 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquired lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.834 189512 DEBUG nova.network.neutron [req-50cedffb-18eb-40cd-b320-60fb7494b3c4 req-2169366c-b87e-4dfe-ad12-b41281350871 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Refreshing network info cache for port d040598e-3c6d-4c31-a052-e42d95473b17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.840 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Start _get_guest_xml network_info=[{"id": "d040598e-3c6d-4c31-a052-e42d95473b17", "address": "fa:16:3e:90:8f:04", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd040598e-3c", "ovs_interfaceid": "d040598e-3c6d-4c31-a052-e42d95473b17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T23:00:11Z,direct_url=<?>,disk_format='qcow2',id=ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793,min_disk=0,min_ram=0,name='tempest-scenario-img--67714485',owner='a0bc498794944fb4bfd74d85d99d70b2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T23:00:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': 'ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.851 189512 WARNING nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.860 189512 DEBUG nova.virt.libvirt.host [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.862 189512 DEBUG nova.virt.libvirt.host [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.875 189512 DEBUG nova.virt.libvirt.host [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.876 189512 DEBUG nova.virt.libvirt.host [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.877 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.878 189512 DEBUG nova.virt.hardware [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T22:55:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2e42a55e-71e2-4041-8ca2-725d63f058bf',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T23:00:11Z,direct_url=<?>,disk_format='qcow2',id=ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793,min_disk=0,min_ram=0,name='tempest-scenario-img--67714485',owner='a0bc498794944fb4bfd74d85d99d70b2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T23:00:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.880 189512 DEBUG nova.virt.hardware [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.881 189512 DEBUG nova.virt.hardware [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.882 189512 DEBUG nova.virt.hardware [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.883 189512 DEBUG nova.virt.hardware [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.883 189512 DEBUG nova.virt.hardware [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.884 189512 DEBUG nova.virt.hardware [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.885 189512 DEBUG nova.virt.hardware [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.887 189512 DEBUG nova.virt.hardware [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.888 189512 DEBUG nova.virt.hardware [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.888 189512 DEBUG nova.virt.hardware [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.895 189512 DEBUG nova.virt.libvirt.vif [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T23:04:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r',id=15,image_ref='ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='3dac0f46-9f79-460b-b6c5-9876493d569a'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a0bc498794944fb4bfd74d85d99d70b2',ramdisk_id='',reservation_id='r-o1y4t3q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-2049243380',owner_user_name='tempest-PrometheusGabbiTest-2049243380-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T23:04:45Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='31117d25a4e94964a6d197de21b13cbe',uuid=42680544-e423-4200-816c-a17b766a4339,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d040598e-3c6d-4c31-a052-e42d95473b17", "address": "fa:16:3e:90:8f:04", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd040598e-3c", "ovs_interfaceid": "d040598e-3c6d-4c31-a052-e42d95473b17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.896 189512 DEBUG nova.network.os_vif_util [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Converting VIF {"id": "d040598e-3c6d-4c31-a052-e42d95473b17", "address": "fa:16:3e:90:8f:04", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd040598e-3c", "ovs_interfaceid": "d040598e-3c6d-4c31-a052-e42d95473b17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.898 189512 DEBUG nova.network.os_vif_util [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:8f:04,bridge_name='br-int',has_traffic_filtering=True,id=d040598e-3c6d-4c31-a052-e42d95473b17,network=Network(76005ead-26ac-4245-b45f-b052ffa2d506),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd040598e-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.900 189512 DEBUG nova.objects.instance [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 42680544-e423-4200-816c-a17b766a4339 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.915 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] End _get_guest_xml xml=<domain type="kvm">
Dec  1 23:04:49 compute-0 nova_compute[189508]:  <uuid>42680544-e423-4200-816c-a17b766a4339</uuid>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  <name>instance-0000000f</name>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  <memory>131072</memory>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  <vcpu>1</vcpu>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  <metadata>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <nova:name>te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r</nova:name>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <nova:creationTime>2025-12-01 23:04:49</nova:creationTime>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <nova:flavor name="m1.nano">
Dec  1 23:04:49 compute-0 nova_compute[189508]:        <nova:memory>128</nova:memory>
Dec  1 23:04:49 compute-0 nova_compute[189508]:        <nova:disk>1</nova:disk>
Dec  1 23:04:49 compute-0 nova_compute[189508]:        <nova:swap>0</nova:swap>
Dec  1 23:04:49 compute-0 nova_compute[189508]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 23:04:49 compute-0 nova_compute[189508]:        <nova:vcpus>1</nova:vcpus>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      </nova:flavor>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <nova:owner>
Dec  1 23:04:49 compute-0 nova_compute[189508]:        <nova:user uuid="31117d25a4e94964a6d197de21b13cbe">tempest-PrometheusGabbiTest-2049243380-project-member</nova:user>
Dec  1 23:04:49 compute-0 nova_compute[189508]:        <nova:project uuid="a0bc498794944fb4bfd74d85d99d70b2">tempest-PrometheusGabbiTest-2049243380</nova:project>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      </nova:owner>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <nova:root type="image" uuid="ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <nova:ports>
Dec  1 23:04:49 compute-0 nova_compute[189508]:        <nova:port uuid="d040598e-3c6d-4c31-a052-e42d95473b17">
Dec  1 23:04:49 compute-0 nova_compute[189508]:          <nova:ip type="fixed" address="10.100.2.30" ipVersion="4"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:        </nova:port>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      </nova:ports>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    </nova:instance>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  </metadata>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  <sysinfo type="smbios">
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <system>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <entry name="manufacturer">RDO</entry>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <entry name="product">OpenStack Compute</entry>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <entry name="serial">42680544-e423-4200-816c-a17b766a4339</entry>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <entry name="uuid">42680544-e423-4200-816c-a17b766a4339</entry>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <entry name="family">Virtual Machine</entry>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    </system>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  </sysinfo>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  <os>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <boot dev="hd"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <smbios mode="sysinfo"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  </os>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  <features>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <acpi/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <apic/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <vmcoreinfo/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  </features>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  <clock offset="utc">
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <timer name="hpet" present="no"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  </clock>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  <cpu mode="host-model" match="exact">
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  </cpu>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  <devices>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <disk type="file" device="disk">
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <target dev="vda" bus="virtio"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    </disk>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <disk type="file" device="cdrom">
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <source file="/var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk.config"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <target dev="sda" bus="sata"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    </disk>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <interface type="ethernet">
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <mac address="fa:16:3e:90:8f:04"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <mtu size="1442"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <target dev="tapd040598e-3c"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    </interface>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <serial type="pty">
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <log file="/var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/console.log" append="off"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    </serial>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <video>
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <model type="virtio"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    </video>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <input type="tablet" bus="usb"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <rng model="virtio">
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <backend model="random">/dev/urandom</backend>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    </rng>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <controller type="usb" index="0"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    <memballoon model="virtio">
Dec  1 23:04:49 compute-0 nova_compute[189508]:      <stats period="10"/>
Dec  1 23:04:49 compute-0 nova_compute[189508]:    </memballoon>
Dec  1 23:04:49 compute-0 nova_compute[189508]:  </devices>
Dec  1 23:04:49 compute-0 nova_compute[189508]: </domain>
Dec  1 23:04:49 compute-0 nova_compute[189508]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.923 189512 DEBUG nova.compute.manager [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Preparing to wait for external event network-vif-plugged-d040598e-3c6d-4c31-a052-e42d95473b17 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.924 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "42680544-e423-4200-816c-a17b766a4339-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.925 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "42680544-e423-4200-816c-a17b766a4339-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.925 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "42680544-e423-4200-816c-a17b766a4339-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.926 189512 DEBUG nova.virt.libvirt.vif [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T23:04:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r',id=15,image_ref='ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='3dac0f46-9f79-460b-b6c5-9876493d569a'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a0bc498794944fb4bfd74d85d99d70b2',ramdisk_id='',reservation_id='r-o1y4t3q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-2049243380',owner_user_name='tempest-PrometheusGabbiTest-2049243380-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T23:04:45Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='31117d25a4e94964a6d197de21b13cbe',uuid=42680544-e423-4200-816c-a17b766a4339,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d040598e-3c6d-4c31-a052-e42d95473b17", "address": "fa:16:3e:90:8f:04", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd040598e-3c", "ovs_interfaceid": "d040598e-3c6d-4c31-a052-e42d95473b17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.926 189512 DEBUG nova.network.os_vif_util [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Converting VIF {"id": "d040598e-3c6d-4c31-a052-e42d95473b17", "address": "fa:16:3e:90:8f:04", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd040598e-3c", "ovs_interfaceid": "d040598e-3c6d-4c31-a052-e42d95473b17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.927 189512 DEBUG nova.network.os_vif_util [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:8f:04,bridge_name='br-int',has_traffic_filtering=True,id=d040598e-3c6d-4c31-a052-e42d95473b17,network=Network(76005ead-26ac-4245-b45f-b052ffa2d506),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd040598e-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.928 189512 DEBUG os_vif [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:8f:04,bridge_name='br-int',has_traffic_filtering=True,id=d040598e-3c6d-4c31-a052-e42d95473b17,network=Network(76005ead-26ac-4245-b45f-b052ffa2d506),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd040598e-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.928 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.929 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.930 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.933 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.934 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd040598e-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.935 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd040598e-3c, col_values=(('external_ids', {'iface-id': 'd040598e-3c6d-4c31-a052-e42d95473b17', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:8f:04', 'vm-uuid': '42680544-e423-4200-816c-a17b766a4339'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:04:49 compute-0 NetworkManager[56278]: <info>  [1764630289.9377] manager: (tapd040598e-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.942 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.946 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:49 compute-0 nova_compute[189508]: 2025-12-01 23:04:49.948 189512 INFO os_vif [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:8f:04,bridge_name='br-int',has_traffic_filtering=True,id=d040598e-3c6d-4c31-a052-e42d95473b17,network=Network(76005ead-26ac-4245-b45f-b052ffa2d506),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd040598e-3c')#033[00m
Dec  1 23:04:50 compute-0 nova_compute[189508]: 2025-12-01 23:04:50.013 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 23:04:50 compute-0 nova_compute[189508]: 2025-12-01 23:04:50.014 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 23:04:50 compute-0 nova_compute[189508]: 2025-12-01 23:04:50.015 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] No VIF found with MAC fa:16:3e:90:8f:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 23:04:50 compute-0 nova_compute[189508]: 2025-12-01 23:04:50.016 189512 INFO nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Using config drive#033[00m
Dec  1 23:04:50 compute-0 nova_compute[189508]: 2025-12-01 23:04:50.777 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:50 compute-0 nova_compute[189508]: 2025-12-01 23:04:50.981 189512 INFO nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Creating config drive at /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk.config#033[00m
Dec  1 23:04:50 compute-0 nova_compute[189508]: 2025-12-01 23:04:50.995 189512 DEBUG oslo_concurrency.processutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfx8h3t9i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:04:51 compute-0 nova_compute[189508]: 2025-12-01 23:04:51.146 189512 DEBUG oslo_concurrency.processutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfx8h3t9i" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:04:51 compute-0 kernel: tapd040598e-3c: entered promiscuous mode
Dec  1 23:04:51 compute-0 NetworkManager[56278]: <info>  [1764630291.2653] manager: (tapd040598e-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Dec  1 23:04:51 compute-0 ovn_controller[97770]: 2025-12-01T23:04:51Z|00168|binding|INFO|Claiming lport d040598e-3c6d-4c31-a052-e42d95473b17 for this chassis.
Dec  1 23:04:51 compute-0 ovn_controller[97770]: 2025-12-01T23:04:51Z|00169|binding|INFO|d040598e-3c6d-4c31-a052-e42d95473b17: Claiming fa:16:3e:90:8f:04 10.100.2.30
Dec  1 23:04:51 compute-0 nova_compute[189508]: 2025-12-01 23:04:51.280 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:51 compute-0 nova_compute[189508]: 2025-12-01 23:04:51.293 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:51 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:51.289 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:8f:04 10.100.2.30'], port_security=['fa:16:3e:90:8f:04 10.100.2.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.30/16', 'neutron:device_id': '42680544-e423-4200-816c-a17b766a4339', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76005ead-26ac-4245-b45f-b052ffa2d506', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1db1c83-5a48-462b-b1b5-4f849ee50fec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39384b3e-eb99-4e89-ab68-0d8f0f8766e1, chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=d040598e-3c6d-4c31-a052-e42d95473b17) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 23:04:51 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:51.290 106662 INFO neutron.agent.ovn.metadata.agent [-] Port d040598e-3c6d-4c31-a052-e42d95473b17 in datapath 76005ead-26ac-4245-b45f-b052ffa2d506 bound to our chassis#033[00m
Dec  1 23:04:51 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:51.292 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76005ead-26ac-4245-b45f-b052ffa2d506#033[00m
Dec  1 23:04:51 compute-0 ovn_controller[97770]: 2025-12-01T23:04:51Z|00170|binding|INFO|Setting lport d040598e-3c6d-4c31-a052-e42d95473b17 ovn-installed in OVS
Dec  1 23:04:51 compute-0 ovn_controller[97770]: 2025-12-01T23:04:51Z|00171|binding|INFO|Setting lport d040598e-3c6d-4c31-a052-e42d95473b17 up in Southbound
Dec  1 23:04:51 compute-0 nova_compute[189508]: 2025-12-01 23:04:51.300 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:51 compute-0 nova_compute[189508]: 2025-12-01 23:04:51.302 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:51 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:51.318 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[9c664446-4600-4202-b846-7b9c4cfc36e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:04:51 compute-0 systemd-udevd[256419]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 23:04:51 compute-0 systemd-machined[155759]: New machine qemu-16-instance-0000000f.
Dec  1 23:04:51 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:51.349 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[641b9dce-d6f6-46c6-8cec-6ba7ceb68dbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:04:51 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Dec  1 23:04:51 compute-0 NetworkManager[56278]: <info>  [1764630291.3537] device (tapd040598e-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 23:04:51 compute-0 NetworkManager[56278]: <info>  [1764630291.3545] device (tapd040598e-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 23:04:51 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:51.356 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab6af16-c6e9-4e98-8b91-492a85ec0855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:04:51 compute-0 podman[256385]: 2025-12-01 23:04:51.371402362 +0000 UTC m=+0.114024684 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec  1 23:04:51 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:51.389 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[928832f8-da3a-457b-9c34-9ef55f4bfc13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:04:51 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:51.410 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[9d307795-4f07-4c9e-ae91-a896d5e6fd2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76005ead-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:7d:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553339, 'reachable_time': 16374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256440, 'error': None, 'target': 'ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:04:51 compute-0 podman[256383]: 2025-12-01 23:04:51.432814523 +0000 UTC m=+0.181017843 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 23:04:51 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:51.445 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[66cee912-9a0c-4742-90d0-e1120807265c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap76005ead-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 553353, 'tstamp': 553353}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256447, 'error': None, 'target': 'ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap76005ead-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 553356, 'tstamp': 553356}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256447, 'error': None, 'target': 'ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:04:51 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:51.447 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76005ead-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:04:51 compute-0 nova_compute[189508]: 2025-12-01 23:04:51.448 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:51 compute-0 nova_compute[189508]: 2025-12-01 23:04:51.449 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:51 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:51.450 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76005ead-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:04:51 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:51.450 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 23:04:51 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:51.451 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76005ead-20, col_values=(('external_ids', {'iface-id': '6cd00ec7-5de6-4094-b01c-8ff2beea0431'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:04:51 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:51.451 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 23:04:51 compute-0 nova_compute[189508]: 2025-12-01 23:04:51.931 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764630291.9310536, 42680544-e423-4200-816c-a17b766a4339 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 23:04:51 compute-0 nova_compute[189508]: 2025-12-01 23:04:51.932 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] VM Started (Lifecycle Event)#033[00m
Dec  1 23:04:51 compute-0 nova_compute[189508]: 2025-12-01 23:04:51.971 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 23:04:51 compute-0 nova_compute[189508]: 2025-12-01 23:04:51.977 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764630291.9311872, 42680544-e423-4200-816c-a17b766a4339 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 23:04:51 compute-0 nova_compute[189508]: 2025-12-01 23:04:51.978 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] VM Paused (Lifecycle Event)#033[00m
Dec  1 23:04:51 compute-0 nova_compute[189508]: 2025-12-01 23:04:51.996 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.002 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.026 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.160 189512 DEBUG nova.compute.manager [req-7a8f799f-fba1-402c-a590-144a34b03492 req-013f2d41-9596-4470-a6c5-192b5d9b0a26 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Received event network-vif-plugged-d040598e-3c6d-4c31-a052-e42d95473b17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.161 189512 DEBUG oslo_concurrency.lockutils [req-7a8f799f-fba1-402c-a590-144a34b03492 req-013f2d41-9596-4470-a6c5-192b5d9b0a26 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "42680544-e423-4200-816c-a17b766a4339-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.162 189512 DEBUG oslo_concurrency.lockutils [req-7a8f799f-fba1-402c-a590-144a34b03492 req-013f2d41-9596-4470-a6c5-192b5d9b0a26 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "42680544-e423-4200-816c-a17b766a4339-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.163 189512 DEBUG oslo_concurrency.lockutils [req-7a8f799f-fba1-402c-a590-144a34b03492 req-013f2d41-9596-4470-a6c5-192b5d9b0a26 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "42680544-e423-4200-816c-a17b766a4339-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.163 189512 DEBUG nova.compute.manager [req-7a8f799f-fba1-402c-a590-144a34b03492 req-013f2d41-9596-4470-a6c5-192b5d9b0a26 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Processing event network-vif-plugged-d040598e-3c6d-4c31-a052-e42d95473b17 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.165 189512 DEBUG nova.compute.manager [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.182 189512 DEBUG nova.virt.driver [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] Emitting event <LifecycleEvent: 1764630292.1695747, 42680544-e423-4200-816c-a17b766a4339 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.183 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] VM Resumed (Lifecycle Event)#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.194 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.207 189512 INFO nova.virt.libvirt.driver [-] [instance: 42680544-e423-4200-816c-a17b766a4339] Instance spawned successfully.#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.208 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.228 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.235 189512 DEBUG nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.238 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.239 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.239 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.240 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.240 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.241 189512 DEBUG nova.virt.libvirt.driver [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.274 189512 INFO nova.compute.manager [None req-0af85878-ec42-43fd-acd2-646f8ef97499 - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.302 189512 INFO nova.compute.manager [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Took 6.56 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.302 189512 DEBUG nova.compute.manager [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.623 189512 INFO nova.compute.manager [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Took 7.34 seconds to build instance.#033[00m
Dec  1 23:04:52 compute-0 nova_compute[189508]: 2025-12-01 23:04:52.664 189512 DEBUG oslo_concurrency.lockutils [None req-afb3f79f-b426-4f0b-a390-da7f6c1ea960 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "42680544-e423-4200-816c-a17b766a4339" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:04:53 compute-0 nova_compute[189508]: 2025-12-01 23:04:53.080 189512 DEBUG nova.network.neutron [req-50cedffb-18eb-40cd-b320-60fb7494b3c4 req-2169366c-b87e-4dfe-ad12-b41281350871 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Updated VIF entry in instance network info cache for port d040598e-3c6d-4c31-a052-e42d95473b17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 23:04:53 compute-0 nova_compute[189508]: 2025-12-01 23:04:53.081 189512 DEBUG nova.network.neutron [req-50cedffb-18eb-40cd-b320-60fb7494b3c4 req-2169366c-b87e-4dfe-ad12-b41281350871 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Updating instance_info_cache with network_info: [{"id": "d040598e-3c6d-4c31-a052-e42d95473b17", "address": "fa:16:3e:90:8f:04", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd040598e-3c", "ovs_interfaceid": "d040598e-3c6d-4c31-a052-e42d95473b17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:04:53 compute-0 nova_compute[189508]: 2025-12-01 23:04:53.100 189512 DEBUG oslo_concurrency.lockutils [req-50cedffb-18eb-40cd-b320-60fb7494b3c4 req-2169366c-b87e-4dfe-ad12-b41281350871 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Releasing lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 23:04:53 compute-0 nova_compute[189508]: 2025-12-01 23:04:53.193 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:04:54 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  1 23:04:54 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  1 23:04:54 compute-0 nova_compute[189508]: 2025-12-01 23:04:54.279 189512 DEBUG nova.compute.manager [req-73bc6312-370c-498e-b852-c8fbd0e8e1cd req-893d031a-b66f-4c55-9e82-e04cc9e21fd0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Received event network-vif-plugged-d040598e-3c6d-4c31-a052-e42d95473b17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 23:04:54 compute-0 nova_compute[189508]: 2025-12-01 23:04:54.281 189512 DEBUG oslo_concurrency.lockutils [req-73bc6312-370c-498e-b852-c8fbd0e8e1cd req-893d031a-b66f-4c55-9e82-e04cc9e21fd0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "42680544-e423-4200-816c-a17b766a4339-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:04:54 compute-0 nova_compute[189508]: 2025-12-01 23:04:54.281 189512 DEBUG oslo_concurrency.lockutils [req-73bc6312-370c-498e-b852-c8fbd0e8e1cd req-893d031a-b66f-4c55-9e82-e04cc9e21fd0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "42680544-e423-4200-816c-a17b766a4339-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:04:54 compute-0 nova_compute[189508]: 2025-12-01 23:04:54.282 189512 DEBUG oslo_concurrency.lockutils [req-73bc6312-370c-498e-b852-c8fbd0e8e1cd req-893d031a-b66f-4c55-9e82-e04cc9e21fd0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "42680544-e423-4200-816c-a17b766a4339-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:04:54 compute-0 nova_compute[189508]: 2025-12-01 23:04:54.282 189512 DEBUG nova.compute.manager [req-73bc6312-370c-498e-b852-c8fbd0e8e1cd req-893d031a-b66f-4c55-9e82-e04cc9e21fd0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] No waiting events found dispatching network-vif-plugged-d040598e-3c6d-4c31-a052-e42d95473b17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 23:04:54 compute-0 nova_compute[189508]: 2025-12-01 23:04:54.283 189512 WARNING nova.compute.manager [req-73bc6312-370c-498e-b852-c8fbd0e8e1cd req-893d031a-b66f-4c55-9e82-e04cc9e21fd0 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Received unexpected event network-vif-plugged-d040598e-3c6d-4c31-a052-e42d95473b17 for instance with vm_state active and task_state None.#033[00m
Dec  1 23:04:54 compute-0 nova_compute[189508]: 2025-12-01 23:04:54.939 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:55 compute-0 nova_compute[189508]: 2025-12-01 23:04:55.780 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:04:56 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:04:56.402 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:04:57 compute-0 podman[256477]: 2025-12-01 23:04:57.842249789 +0000 UTC m=+0.113623343 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:04:57 compute-0 podman[256478]: 2025-12-01 23:04:57.84296462 +0000 UTC m=+0.100864312 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal)
Dec  1 23:04:57 compute-0 podman[256476]: 2025-12-01 23:04:57.852029227 +0000 UTC m=+0.126493788 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 23:04:57 compute-0 podman[256479]: 2025-12-01 23:04:57.862662938 +0000 UTC m=+0.110713640 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vendor=Red Hat, Inc., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release-0.7.12=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, distribution-scope=public, release=1214.1726694543)
Dec  1 23:04:59 compute-0 podman[203693]: time="2025-12-01T23:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:04:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:04:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Dec  1 23:04:59 compute-0 nova_compute[189508]: 2025-12-01 23:04:59.944 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:05:00 compute-0 nova_compute[189508]: 2025-12-01 23:05:00.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:05:00 compute-0 nova_compute[189508]: 2025-12-01 23:05:00.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:05:00 compute-0 nova_compute[189508]: 2025-12-01 23:05:00.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:05:00 compute-0 nova_compute[189508]: 2025-12-01 23:05:00.759 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 23:05:00 compute-0 nova_compute[189508]: 2025-12-01 23:05:00.760 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 23:05:00 compute-0 nova_compute[189508]: 2025-12-01 23:05:00.761 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 23:05:00 compute-0 nova_compute[189508]: 2025-12-01 23:05:00.762 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid 91dfa889-2ab6-4683-bc07-870d2df30bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 23:05:00 compute-0 nova_compute[189508]: 2025-12-01 23:05:00.783 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:05:01 compute-0 openstack_network_exporter[205887]: ERROR   23:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:05:01 compute-0 openstack_network_exporter[205887]: ERROR   23:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:05:01 compute-0 openstack_network_exporter[205887]: ERROR   23:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:05:01 compute-0 openstack_network_exporter[205887]: ERROR   23:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:05:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:05:01 compute-0 openstack_network_exporter[205887]: ERROR   23:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:05:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:05:02 compute-0 nova_compute[189508]: 2025-12-01 23:05:02.318 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updating instance_info_cache with network_info: [{"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:05:02 compute-0 nova_compute[189508]: 2025-12-01 23:05:02.343 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 23:05:02 compute-0 nova_compute[189508]: 2025-12-01 23:05:02.344 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 23:05:02 compute-0 nova_compute[189508]: 2025-12-01 23:05:02.344 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:05:02 compute-0 nova_compute[189508]: 2025-12-01 23:05:02.345 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:05:02 compute-0 nova_compute[189508]: 2025-12-01 23:05:02.345 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:05:02 compute-0 nova_compute[189508]: 2025-12-01 23:05:02.345 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:05:02 compute-0 nova_compute[189508]: 2025-12-01 23:05:02.346 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:05:03 compute-0 nova_compute[189508]: 2025-12-01 23:05:03.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:05:03 compute-0 nova_compute[189508]: 2025-12-01 23:05:03.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:05:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:05:04.650 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:05:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:05:04.651 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:05:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:05:04.652 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:05:04 compute-0 nova_compute[189508]: 2025-12-01 23:05:04.949 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:05:05 compute-0 nova_compute[189508]: 2025-12-01 23:05:05.786 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:05:08 compute-0 nova_compute[189508]: 2025-12-01 23:05:08.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:05:08 compute-0 nova_compute[189508]: 2025-12-01 23:05:08.258 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:05:08 compute-0 nova_compute[189508]: 2025-12-01 23:05:08.259 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:05:08 compute-0 nova_compute[189508]: 2025-12-01 23:05:08.260 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:05:08 compute-0 nova_compute[189508]: 2025-12-01 23:05:08.261 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:05:08 compute-0 nova_compute[189508]: 2025-12-01 23:05:08.380 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:05:08 compute-0 nova_compute[189508]: 2025-12-01 23:05:08.457 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:05:08 compute-0 nova_compute[189508]: 2025-12-01 23:05:08.464 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:05:08 compute-0 nova_compute[189508]: 2025-12-01 23:05:08.527 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:05:08 compute-0 nova_compute[189508]: 2025-12-01 23:05:08.535 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:05:08 compute-0 nova_compute[189508]: 2025-12-01 23:05:08.601 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:05:08 compute-0 nova_compute[189508]: 2025-12-01 23:05:08.603 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:05:08 compute-0 nova_compute[189508]: 2025-12-01 23:05:08.671 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:05:09 compute-0 nova_compute[189508]: 2025-12-01 23:05:09.050 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:05:09 compute-0 nova_compute[189508]: 2025-12-01 23:05:09.053 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4991MB free_disk=72.09487915039062GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:05:09 compute-0 nova_compute[189508]: 2025-12-01 23:05:09.054 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 23:05:09 compute-0 nova_compute[189508]: 2025-12-01 23:05:09.055 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 23:05:09 compute-0 nova_compute[189508]: 2025-12-01 23:05:09.140 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 91dfa889-2ab6-4683-bc07-870d2df30bdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 23:05:09 compute-0 nova_compute[189508]: 2025-12-01 23:05:09.141 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 42680544-e423-4200-816c-a17b766a4339 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 23:05:09 compute-0 nova_compute[189508]: 2025-12-01 23:05:09.141 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 23:05:09 compute-0 nova_compute[189508]: 2025-12-01 23:05:09.142 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
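The "Final resource view" figures are consistent with the two instance allocations logged just above plus the 512 MB of reserved host memory reported in the inventory record: used_ram = 512 + 2 × 128 MB, used_disk = 2 × 1 GB, used_vcpus = 2 × 1 VCPU. A quick arithmetic cross-check (an illustrative reconstruction, not the ResourceTracker implementation):

```python
# Cross-check the logged "Final resource view" against the per-instance
# placement allocations logged above. The 512 MB memory reserve comes from
# the inventory record; this accounting is illustrative only.
allocations = [
    {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1},  # instance 91dfa889-...
    {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1},  # instance 42680544-...
]
reserved_ram_mb = 512  # 'reserved' for MEMORY_MB in the inventory record

used_ram = reserved_ram_mb + sum(a['MEMORY_MB'] for a in allocations)
used_disk = sum(a['DISK_GB'] for a in allocations)
used_vcpus = sum(a['VCPU'] for a in allocations)

print(used_ram, used_disk, used_vcpus)  # -> 768 2 2, matching the log line
```

Note that the memory figure folds in the host reserve while the disk figure here reflects only the instance allocations, which is what matches the logged used_disk=2GB.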
Dec  1 23:05:09 compute-0 nova_compute[189508]: 2025-12-01 23:05:09.228 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 23:05:09 compute-0 nova_compute[189508]: 2025-12-01 23:05:09.246 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
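The inventory payload in this line can be read with Placement's capacity rule: the schedulable amount of each resource class is (total − reserved) × allocation_ratio. A minimal sketch applying that rule to the logged data (illustrative helper, not Nova or Placement source):

```python
# Derive effective Placement capacity from the inventory record logged
# above. (total - reserved) * allocation_ratio is the standard Placement
# capacity rule; this helper is illustrative, not Placement source code.
import math

inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
}

def effective_capacity(inv):
    """Total units per resource class that Placement will allow."""
    return {
        rc: math.floor((v['total'] - v['reserved']) * v['allocation_ratio'])
        for rc, v in inv.items()
    }

print(effective_capacity(inventory))
# -> {'VCPU': 32, 'MEMORY_MB': 7167, 'DISK_GB': 70}
```

This is why 8 physical vCPUs appear as 32 schedulable VCPU units (allocation_ratio 4.0), while DISK_GB is undersubscribed (ratio 0.9) to keep headroom on the 79 GB store.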
Dec  1 23:05:09 compute-0 nova_compute[189508]: 2025-12-01 23:05:09.264 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 23:05:09 compute-0 nova_compute[189508]: 2025-12-01 23:05:09.265 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
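The lockutils pair in this update cycle ("acquired :: waited 0.001s" then "released :: held 0.210s") distinguishes time spent blocked acquiring the lock from time spent inside the critical section. A toy illustration of that bookkeeping (NOT the oslo.concurrency implementation, just the same two measurements):

```python
# Toy illustration of the "waited" vs "held" timing that
# oslo_concurrency.lockutils logs around the "compute_resources" lock:
# waited = time blocked acquiring; held = acquire-to-release time.
import threading
import time

def with_timed_lock(lock, fn):
    t0 = time.monotonic()
    lock.acquire()          # block until the lock is free
    t1 = time.monotonic()
    try:
        fn()                # critical section, e.g. the resource update
    finally:
        lock.release()
        t2 = time.monotonic()
    return {"waited": t1 - t0, "held": t2 - t1}

stats = with_timed_lock(threading.Lock(), lambda: time.sleep(0.01))
print(f'waited {stats["waited"]:.3f}s :: held {stats["held"]:.3f}s')
```

On an uncontended lock, as in this run, "waited" is near zero while "held" tracks the duration of the work done under the lock.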
Dec  1 23:05:09 compute-0 nova_compute[189508]: 2025-12-01 23:05:09.953 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:05:10 compute-0 nova_compute[189508]: 2025-12-01 23:05:10.788 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:05:12 compute-0 podman[256568]: 2025-12-01 23:05:12.837886798 +0000 UTC m=+0.113925602 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:05:14 compute-0 nova_compute[189508]: 2025-12-01 23:05:14.959 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:05:15 compute-0 nova_compute[189508]: 2025-12-01 23:05:15.792 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:05:15 compute-0 podman[256591]: 2025-12-01 23:05:15.833544313 +0000 UTC m=+0.109910618 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS 
Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm)
Dec  1 23:05:15 compute-0 podman[256590]: 2025-12-01 23:05:15.846050218 +0000 UTC m=+0.130603695 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  1 23:05:19 compute-0 nova_compute[189508]: 2025-12-01 23:05:19.965 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:05:20 compute-0 nova_compute[189508]: 2025-12-01 23:05:20.794 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:05:21 compute-0 ovn_controller[97770]: 2025-12-01T23:05:21Z|00172|memory_trim|INFO|Detected inactivity (last active 30018 ms ago): trimming memory
Dec  1 23:05:21 compute-0 podman[256628]: 2025-12-01 23:05:21.837519582 +0000 UTC m=+0.118589944 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 23:05:21 compute-0 podman[256629]: 2025-12-01 23:05:21.853847135 +0000 UTC m=+0.108812486 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  1 23:05:24 compute-0 nova_compute[189508]: 2025-12-01 23:05:24.972 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:05:25 compute-0 nova_compute[189508]: 2025-12-01 23:05:25.798 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:05:26 compute-0 ovn_controller[97770]: 2025-12-01T23:05:26Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:90:8f:04 10.100.2.30
Dec  1 23:05:26 compute-0 ovn_controller[97770]: 2025-12-01T23:05:26Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:90:8f:04 10.100.2.30
Dec  1 23:05:28 compute-0 podman[256687]: 2025-12-01 23:05:28.834243892 +0000 UTC m=+0.107198829 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:05:28 compute-0 podman[256688]: 2025-12-01 23:05:28.847407246 +0000 UTC m=+0.113642623 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  1 23:05:28 compute-0 podman[256689]: 2025-12-01 23:05:28.850626067 +0000 UTC m=+0.111306976 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, 
build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Dec  1 23:05:28 compute-0 podman[256690]: 2025-12-01 23:05:28.881156293 +0000 UTC m=+0.134526095 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, version=9.4, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, architecture=x86_64, container_name=kepler, name=ubi9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 
'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  1 23:05:29 compute-0 podman[203693]: time="2025-12-01T23:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:05:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:05:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Dec  1 23:05:29 compute-0 nova_compute[189508]: 2025-12-01 23:05:29.978 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:05:30 compute-0 nova_compute[189508]: 2025-12-01 23:05:30.800 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:05:31 compute-0 openstack_network_exporter[205887]: ERROR   23:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:05:31 compute-0 openstack_network_exporter[205887]: ERROR   23:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:05:31 compute-0 openstack_network_exporter[205887]: ERROR   23:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:05:31 compute-0 openstack_network_exporter[205887]: ERROR   23:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:05:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:05:31 compute-0 openstack_network_exporter[205887]: ERROR   23:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:05:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:05:34 compute-0 nova_compute[189508]: 2025-12-01 23:05:34.982 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.277 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.277 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.277 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.286 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '91dfa889-2ab6-4683-bc07-870d2df30bdd', 'name': 'te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh', 'flavor': {'id': '2e42a55e-71e2-4041-8ca2-725d63f058bf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'user_id': '31117d25a4e94964a6d197de21b13cbe', 'hostId': '6371054f80a0ac1fb11dac1293ce9e4cad9937bba665381127450a90', 'status': 'active', 'metadata': {'metering.server_group': '3dac0f46-9f79-460b-b6c5-9876493d569a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.290 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 42680544-e423-4200-816c-a17b766a4339 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.292 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/42680544-e423-4200-816c-a17b766a4339 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}82f68aee2d35afc7725a847ea4300457258faf9d3b47fbdf3a1dc69f53294b24" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 23:05:35 compute-0 nova_compute[189508]: 2025-12-01 23:05:35.802 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.874 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1831 Content-Type: application/json Date: Mon, 01 Dec 2025 23:05:35 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-51211dbc-9c19-404b-a588-d40e64319f56 x-openstack-request-id: req-51211dbc-9c19-404b-a588-d40e64319f56 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.874 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "42680544-e423-4200-816c-a17b766a4339", "name": "te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r", "status": "ACTIVE", "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "user_id": "31117d25a4e94964a6d197de21b13cbe", "metadata": {"metering.server_group": "3dac0f46-9f79-460b-b6c5-9876493d569a"}, "hostId": "6371054f80a0ac1fb11dac1293ce9e4cad9937bba665381127450a90", "image": {"id": "ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793"}]}, "flavor": {"id": "2e42a55e-71e2-4041-8ca2-725d63f058bf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/2e42a55e-71e2-4041-8ca2-725d63f058bf"}]}, "created": "2025-12-01T23:04:44Z", "updated": "2025-12-01T23:04:52Z", "addresses": {"": [{"version": 4, "addr": "10.100.2.30", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:90:8f:04"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/42680544-e423-4200-816c-a17b766a4339"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/42680544-e423-4200-816c-a17b766a4339"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T23:04:52.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.874 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/42680544-e423-4200-816c-a17b766a4339 used request id req-51211dbc-9c19-404b-a588-d40e64319f56 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.875 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '42680544-e423-4200-816c-a17b766a4339', 'name': 'te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r', 'flavor': {'id': '2e42a55e-71e2-4041-8ca2-725d63f058bf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'user_id': '31117d25a4e94964a6d197de21b13cbe', 'hostId': '6371054f80a0ac1fb11dac1293ce9e4cad9937bba665381127450a90', 'status': 'active', 'metadata': {'metering.server_group': '3dac0f46-9f79-460b-b6c5-9876493d569a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.876 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.876 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.876 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.876 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.877 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T23:05:35.876497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.881 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.885 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 42680544-e423-4200-816c-a17b766a4339 / tapd040598e-3c inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.885 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.886 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.886 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.886 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.886 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.886 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.886 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.887 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.887 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.887 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.887 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.887 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.887 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.888 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.888 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.888 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.889 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.889 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.889 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.889 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.889 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.890 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T23:05:35.886662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.890 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T23:05:35.888006) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.890 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T23:05:35.889597) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.915 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.915 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.935 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.935 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.936 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.936 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.936 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.936 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.936 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.936 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.938 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T23:05:35.936773) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.988 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.bytes volume: 29568000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:35.988 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.029 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.bytes volume: 29572096 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.030 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.030 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.030 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.031 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.031 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.031 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.031 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.latency volume: 683363039 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.031 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.latency volume: 52138549 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.031 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.latency volume: 584056585 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.032 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.latency volume: 66184682 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.032 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.032 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.032 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.033 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.033 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.033 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.033 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.033 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.034 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.034 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.034 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.034 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.034 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.035 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T23:05:36.031204) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.035 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.035 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.035 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T23:05:36.033262) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.035 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.requests volume: 1061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.035 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T23:05:36.035151) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.035 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.035 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.requests volume: 1062 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.036 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.036 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.036 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.036 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.036 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.036 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.036 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.037 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.037 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.037 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.usage volume: 29818880 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.037 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.038 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.038 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.038 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.038 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.bytes volume: 72867840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.039 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.039 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.bytes volume: 72777728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.039 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.040 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.040 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.040 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.040 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.040 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.latency volume: 3988333589 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.041 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.041 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.latency volume: 6571184510 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.041 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T23:05:36.036927) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.042 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T23:05:36.038760) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.042 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T23:05:36.040574) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T23:05:36.042358) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.062 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/cpu volume: 308000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.083 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/cpu volume: 42150000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.084 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.084 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.084 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.084 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.085 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.085 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.086 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.086 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.086 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.086 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.086 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.087 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.087 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.088 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.088 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.requests volume: 326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.088 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.088 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.requests volume: 305 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.089 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.089 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.089 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.089 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.090 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.090 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.090 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r>]
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.090 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.091 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.091 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.091 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T23:05:36.084908) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.091 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T23:05:36.086680) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T23:05:36.088055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.092 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.092 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T23:05:36.090090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.092 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.092 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.092 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.092 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.092 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.093 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.093 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.093 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.093 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.093 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.094 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.094 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.095 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.095 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.095 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.095 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.095 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.095 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.096 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.096 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T23:05:36.091209) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.096 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T23:05:36.092432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T23:05:36.093853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.096 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T23:05:36.095154) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.096 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.097 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.097 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T23:05:36.096597) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.097 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.098 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.098 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.098 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.098 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.098 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.098 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.bytes volume: 1550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.099 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.099 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.099 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T23:05:36.098433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.099 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.100 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.100 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.100 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.100 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T23:05:36.100114) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.100 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.101 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.101 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.101 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.101 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.101 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.101 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/memory.usage volume: 43.69921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.101 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T23:05:36.101609) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.102 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/memory.usage volume: 43.515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.102 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.102 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.102 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.103 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.103 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.103 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.103 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T23:05:36.103104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.103 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r>]
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.103 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.103 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.104 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.104 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.104 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.104 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.bytes volume: 1346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.105 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T23:05:36.104129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:36 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:05:36.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:05:39 compute-0 nova_compute[189508]: 2025-12-01 23:05:39.988 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:05:40 compute-0 nova_compute[189508]: 2025-12-01 23:05:40.805 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:05:43 compute-0 podman[256763]: 2025-12-01 23:05:43.845536275 +0000 UTC m=+0.118033078 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:05:44 compute-0 nova_compute[189508]: 2025-12-01 23:05:44.994 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:05:45 compute-0 nova_compute[189508]: 2025-12-01 23:05:45.808 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:05:46 compute-0 podman[256787]: 2025-12-01 23:05:46.837333081 +0000 UTC m=+0.107992304 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 23:05:46 compute-0 podman[256786]: 2025-12-01 23:05:46.860720374 +0000 UTC m=+0.131011966 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  1 23:05:49 compute-0 nova_compute[189508]: 2025-12-01 23:05:49.260 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:05:50 compute-0 nova_compute[189508]: 2025-12-01 23:05:49.999 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:05:50 compute-0 nova_compute[189508]: 2025-12-01 23:05:50.811 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:05:52 compute-0 podman[256826]: 2025-12-01 23:05:52.829684391 +0000 UTC m=+0.094261134 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 23:05:52 compute-0 podman[256825]: 2025-12-01 23:05:52.899814699 +0000 UTC m=+0.169298021 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  1 23:05:55 compute-0 nova_compute[189508]: 2025-12-01 23:05:55.005 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:05:55 compute-0 nova_compute[189508]: 2025-12-01 23:05:55.816 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:05:59 compute-0 podman[203693]: time="2025-12-01T23:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:05:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:05:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec  1 23:05:59 compute-0 podman[256869]: 2025-12-01 23:05:59.832130653 +0000 UTC m=+0.096277241 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:05:59 compute-0 podman[256871]: 2025-12-01 23:05:59.84436896 +0000 UTC m=+0.110580116 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, container_name=kepler, release=1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.openshift.tags=base rhel9, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 23:05:59 compute-0 podman[256868]: 2025-12-01 23:05:59.847068707 +0000 UTC m=+0.124888223 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 23:05:59 compute-0 podman[256870]: 2025-12-01 23:05:59.859920111 +0000 UTC m=+0.126365534 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Dec  1 23:06:00 compute-0 nova_compute[189508]: 2025-12-01 23:06:00.012 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:00 compute-0 nova_compute[189508]: 2025-12-01 23:06:00.819 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:01 compute-0 nova_compute[189508]: 2025-12-01 23:06:01.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:06:01 compute-0 openstack_network_exporter[205887]: ERROR   23:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:06:01 compute-0 openstack_network_exporter[205887]: ERROR   23:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:06:01 compute-0 openstack_network_exporter[205887]: ERROR   23:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:06:01 compute-0 openstack_network_exporter[205887]: ERROR   23:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:06:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:06:01 compute-0 openstack_network_exporter[205887]: ERROR   23:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:06:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:06:02 compute-0 nova_compute[189508]: 2025-12-01 23:06:02.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:06:02 compute-0 nova_compute[189508]: 2025-12-01 23:06:02.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:06:02 compute-0 nova_compute[189508]: 2025-12-01 23:06:02.742 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 23:06:02 compute-0 nova_compute[189508]: 2025-12-01 23:06:02.744 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 23:06:02 compute-0 nova_compute[189508]: 2025-12-01 23:06:02.744 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 23:06:04 compute-0 nova_compute[189508]: 2025-12-01 23:06:04.135 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Updating instance_info_cache with network_info: [{"id": "d040598e-3c6d-4c31-a052-e42d95473b17", "address": "fa:16:3e:90:8f:04", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd040598e-3c", "ovs_interfaceid": "d040598e-3c6d-4c31-a052-e42d95473b17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:06:04 compute-0 nova_compute[189508]: 2025-12-01 23:06:04.157 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 23:06:04 compute-0 nova_compute[189508]: 2025-12-01 23:06:04.158 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 23:06:04 compute-0 nova_compute[189508]: 2025-12-01 23:06:04.158 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:06:04 compute-0 nova_compute[189508]: 2025-12-01 23:06:04.159 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:06:04 compute-0 nova_compute[189508]: 2025-12-01 23:06:04.159 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:06:04 compute-0 nova_compute[189508]: 2025-12-01 23:06:04.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:06:04 compute-0 nova_compute[189508]: 2025-12-01 23:06:04.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:06:04 compute-0 nova_compute[189508]: 2025-12-01 23:06:04.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:06:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:06:04.651 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:06:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:06:04.652 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:06:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:06:04.653 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:06:05 compute-0 nova_compute[189508]: 2025-12-01 23:06:05.025 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:05 compute-0 nova_compute[189508]: 2025-12-01 23:06:05.823 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:08 compute-0 nova_compute[189508]: 2025-12-01 23:06:08.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:06:08 compute-0 nova_compute[189508]: 2025-12-01 23:06:08.239 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:06:08 compute-0 nova_compute[189508]: 2025-12-01 23:06:08.240 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:06:08 compute-0 nova_compute[189508]: 2025-12-01 23:06:08.240 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:06:08 compute-0 nova_compute[189508]: 2025-12-01 23:06:08.240 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:06:08 compute-0 nova_compute[189508]: 2025-12-01 23:06:08.329 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:06:08 compute-0 nova_compute[189508]: 2025-12-01 23:06:08.411 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:06:08 compute-0 nova_compute[189508]: 2025-12-01 23:06:08.413 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:06:08 compute-0 nova_compute[189508]: 2025-12-01 23:06:08.493 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:06:08 compute-0 nova_compute[189508]: 2025-12-01 23:06:08.505 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:06:08 compute-0 nova_compute[189508]: 2025-12-01 23:06:08.584 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:06:08 compute-0 nova_compute[189508]: 2025-12-01 23:06:08.585 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:06:08 compute-0 nova_compute[189508]: 2025-12-01 23:06:08.668 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:06:09 compute-0 nova_compute[189508]: 2025-12-01 23:06:09.039 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:06:09 compute-0 nova_compute[189508]: 2025-12-01 23:06:09.041 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4965MB free_disk=72.0665283203125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:06:09 compute-0 nova_compute[189508]: 2025-12-01 23:06:09.041 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:06:09 compute-0 nova_compute[189508]: 2025-12-01 23:06:09.042 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:06:09 compute-0 nova_compute[189508]: 2025-12-01 23:06:09.143 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 91dfa889-2ab6-4683-bc07-870d2df30bdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:06:09 compute-0 nova_compute[189508]: 2025-12-01 23:06:09.143 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 42680544-e423-4200-816c-a17b766a4339 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:06:09 compute-0 nova_compute[189508]: 2025-12-01 23:06:09.144 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:06:09 compute-0 nova_compute[189508]: 2025-12-01 23:06:09.144 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:06:09 compute-0 nova_compute[189508]: 2025-12-01 23:06:09.238 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:06:09 compute-0 nova_compute[189508]: 2025-12-01 23:06:09.257 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:06:09 compute-0 nova_compute[189508]: 2025-12-01 23:06:09.258 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:06:09 compute-0 nova_compute[189508]: 2025-12-01 23:06:09.259 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:06:10 compute-0 nova_compute[189508]: 2025-12-01 23:06:10.031 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:10 compute-0 nova_compute[189508]: 2025-12-01 23:06:10.825 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:14 compute-0 podman[256960]: 2025-12-01 23:06:14.798111251 +0000 UTC m=+0.086524445 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:06:15 compute-0 nova_compute[189508]: 2025-12-01 23:06:15.037 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:15 compute-0 nova_compute[189508]: 2025-12-01 23:06:15.828 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:17 compute-0 podman[256982]: 2025-12-01 23:06:17.820476753 +0000 UTC m=+0.090714403 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 23:06:17 compute-0 podman[256983]: 2025-12-01 23:06:17.831623779 +0000 UTC m=+0.088855000 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 23:06:20 compute-0 nova_compute[189508]: 2025-12-01 23:06:20.046 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:20 compute-0 nova_compute[189508]: 2025-12-01 23:06:20.832 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:23 compute-0 podman[257025]: 2025-12-01 23:06:23.805240128 +0000 UTC m=+0.089083407 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 23:06:23 compute-0 podman[257024]: 2025-12-01 23:06:23.813729868 +0000 UTC m=+0.101912240 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  1 23:06:25 compute-0 nova_compute[189508]: 2025-12-01 23:06:25.051 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:25 compute-0 nova_compute[189508]: 2025-12-01 23:06:25.837 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:29 compute-0 podman[203693]: time="2025-12-01T23:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:06:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:06:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Dec  1 23:06:30 compute-0 nova_compute[189508]: 2025-12-01 23:06:30.055 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:30 compute-0 podman[257076]: 2025-12-01 23:06:30.81862595 +0000 UTC m=+0.069583834 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  1 23:06:30 compute-0 podman[257077]: 2025-12-01 23:06:30.837573017 +0000 UTC m=+0.081322567 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Dec  1 23:06:30 compute-0 nova_compute[189508]: 2025-12-01 23:06:30.837 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:30 compute-0 podman[257075]: 2025-12-01 23:06:30.845040049 +0000 UTC m=+0.105066100 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 23:06:30 compute-0 podman[257078]: 2025-12-01 23:06:30.867072284 +0000 UTC m=+0.103599249 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=edpm, io.openshift.expose-services=)
Dec  1 23:06:31 compute-0 openstack_network_exporter[205887]: ERROR   23:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:06:31 compute-0 openstack_network_exporter[205887]: ERROR   23:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:06:31 compute-0 openstack_network_exporter[205887]: ERROR   23:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:06:31 compute-0 openstack_network_exporter[205887]: ERROR   23:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:06:31 compute-0 openstack_network_exporter[205887]: ERROR   23:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:06:35 compute-0 nova_compute[189508]: 2025-12-01 23:06:35.059 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:35 compute-0 nova_compute[189508]: 2025-12-01 23:06:35.840 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:40 compute-0 nova_compute[189508]: 2025-12-01 23:06:40.064 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:40 compute-0 nova_compute[189508]: 2025-12-01 23:06:40.841 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:45 compute-0 nova_compute[189508]: 2025-12-01 23:06:45.068 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:45 compute-0 podman[257160]: 2025-12-01 23:06:45.843856968 +0000 UTC m=+0.111489222 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 23:06:45 compute-0 nova_compute[189508]: 2025-12-01 23:06:45.843 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:48 compute-0 podman[257184]: 2025-12-01 23:06:48.832492724 +0000 UTC m=+0.108542959 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:06:48 compute-0 podman[257185]: 2025-12-01 23:06:48.867143747 +0000 UTC m=+0.123341569 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 23:06:50 compute-0 nova_compute[189508]: 2025-12-01 23:06:50.073 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:50 compute-0 nova_compute[189508]: 2025-12-01 23:06:50.256 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:06:50 compute-0 nova_compute[189508]: 2025-12-01 23:06:50.846 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:54 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 23:06:54 compute-0 podman[257224]: 2025-12-01 23:06:54.428026302 +0000 UTC m=+0.137014986 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  1 23:06:54 compute-0 podman[257223]: 2025-12-01 23:06:54.503138242 +0000 UTC m=+0.221311886 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 23:06:55 compute-0 nova_compute[189508]: 2025-12-01 23:06:55.075 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:55 compute-0 nova_compute[189508]: 2025-12-01 23:06:55.849 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:06:56 compute-0 nova_compute[189508]: 2025-12-01 23:06:56.194 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:06:59 compute-0 podman[203693]: time="2025-12-01T23:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:06:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:06:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec  1 23:07:00 compute-0 nova_compute[189508]: 2025-12-01 23:07:00.078 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:07:00 compute-0 nova_compute[189508]: 2025-12-01 23:07:00.852 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:07:01 compute-0 nova_compute[189508]: 2025-12-01 23:07:01.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:07:01 compute-0 openstack_network_exporter[205887]: ERROR   23:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:07:01 compute-0 openstack_network_exporter[205887]: ERROR   23:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:07:01 compute-0 openstack_network_exporter[205887]: ERROR   23:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:07:01 compute-0 openstack_network_exporter[205887]: ERROR   23:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:07:01 compute-0 openstack_network_exporter[205887]: ERROR   23:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:07:01 compute-0 podman[257269]: 2025-12-01 23:07:01.841758237 +0000 UTC m=+0.102349063 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_id=edpm, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public)
Dec  1 23:07:01 compute-0 podman[257268]: 2025-12-01 23:07:01.841723966 +0000 UTC m=+0.106341567 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  1 23:07:01 compute-0 podman[257270]: 2025-12-01 23:07:01.85104314 +0000 UTC m=+0.106633905 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, config_id=edpm, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all 
of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=kepler, release=1214.1726694543, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec  1 23:07:01 compute-0 podman[257267]: 2025-12-01 23:07:01.862142125 +0000 UTC m=+0.130665576 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 23:07:02 compute-0 nova_compute[189508]: 2025-12-01 23:07:02.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:07:02 compute-0 nova_compute[189508]: 2025-12-01 23:07:02.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:07:03 compute-0 nova_compute[189508]: 2025-12-01 23:07:03.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:07:04 compute-0 nova_compute[189508]: 2025-12-01 23:07:04.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:07:04 compute-0 nova_compute[189508]: 2025-12-01 23:07:04.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:07:04 compute-0 nova_compute[189508]: 2025-12-01 23:07:04.202 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:07:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:07:04.652 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:07:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:07:04.653 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:07:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:07:04.654 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:07:04 compute-0 nova_compute[189508]: 2025-12-01 23:07:04.802 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 23:07:04 compute-0 nova_compute[189508]: 2025-12-01 23:07:04.803 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 23:07:04 compute-0 nova_compute[189508]: 2025-12-01 23:07:04.804 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 23:07:04 compute-0 nova_compute[189508]: 2025-12-01 23:07:04.805 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid 91dfa889-2ab6-4683-bc07-870d2df30bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 23:07:05 compute-0 nova_compute[189508]: 2025-12-01 23:07:05.081 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:07:05 compute-0 nova_compute[189508]: 2025-12-01 23:07:05.854 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:07:06 compute-0 nova_compute[189508]: 2025-12-01 23:07:06.398 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updating instance_info_cache with network_info: [{"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:07:06 compute-0 nova_compute[189508]: 2025-12-01 23:07:06.415 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 23:07:06 compute-0 nova_compute[189508]: 2025-12-01 23:07:06.416 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 23:07:06 compute-0 nova_compute[189508]: 2025-12-01 23:07:06.417 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:07:06 compute-0 nova_compute[189508]: 2025-12-01 23:07:06.418 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:07:06 compute-0 nova_compute[189508]: 2025-12-01 23:07:06.418 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:07:08 compute-0 nova_compute[189508]: 2025-12-01 23:07:08.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:07:08 compute-0 nova_compute[189508]: 2025-12-01 23:07:08.312 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:07:08 compute-0 nova_compute[189508]: 2025-12-01 23:07:08.313 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:07:08 compute-0 nova_compute[189508]: 2025-12-01 23:07:08.313 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:07:08 compute-0 nova_compute[189508]: 2025-12-01 23:07:08.314 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:07:08 compute-0 nova_compute[189508]: 2025-12-01 23:07:08.537 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:07:08 compute-0 nova_compute[189508]: 2025-12-01 23:07:08.595 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:07:08 compute-0 nova_compute[189508]: 2025-12-01 23:07:08.597 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:07:08 compute-0 nova_compute[189508]: 2025-12-01 23:07:08.657 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:07:08 compute-0 nova_compute[189508]: 2025-12-01 23:07:08.664 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:07:08 compute-0 nova_compute[189508]: 2025-12-01 23:07:08.720 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:07:08 compute-0 nova_compute[189508]: 2025-12-01 23:07:08.721 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:07:08 compute-0 nova_compute[189508]: 2025-12-01 23:07:08.785 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:07:09 compute-0 nova_compute[189508]: 2025-12-01 23:07:09.089 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:07:09 compute-0 nova_compute[189508]: 2025-12-01 23:07:09.091 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4958MB free_disk=72.06649398803711GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:07:09 compute-0 nova_compute[189508]: 2025-12-01 23:07:09.092 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:07:09 compute-0 nova_compute[189508]: 2025-12-01 23:07:09.092 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:07:09 compute-0 nova_compute[189508]: 2025-12-01 23:07:09.497 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 91dfa889-2ab6-4683-bc07-870d2df30bdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:07:09 compute-0 nova_compute[189508]: 2025-12-01 23:07:09.498 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 42680544-e423-4200-816c-a17b766a4339 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:07:09 compute-0 nova_compute[189508]: 2025-12-01 23:07:09.498 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:07:09 compute-0 nova_compute[189508]: 2025-12-01 23:07:09.499 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:07:09 compute-0 nova_compute[189508]: 2025-12-01 23:07:09.565 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:07:09 compute-0 nova_compute[189508]: 2025-12-01 23:07:09.663 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:07:09 compute-0 nova_compute[189508]: 2025-12-01 23:07:09.665 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:07:09 compute-0 nova_compute[189508]: 2025-12-01 23:07:09.665 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:07:10 compute-0 nova_compute[189508]: 2025-12-01 23:07:10.086 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:07:10 compute-0 nova_compute[189508]: 2025-12-01 23:07:10.855 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:07:15 compute-0 nova_compute[189508]: 2025-12-01 23:07:15.089 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:07:15 compute-0 nova_compute[189508]: 2025-12-01 23:07:15.858 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:07:16 compute-0 podman[257355]: 2025-12-01 23:07:16.796614768 +0000 UTC m=+0.083646673 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:07:19 compute-0 podman[257378]: 2025-12-01 23:07:19.810684815 +0000 UTC m=+0.087465200 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  1 23:07:19 compute-0 podman[257379]: 2025-12-01 23:07:19.8496443 +0000 UTC m=+0.115460484 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 23:07:20 compute-0 nova_compute[189508]: 2025-12-01 23:07:20.093 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:07:20 compute-0 nova_compute[189508]: 2025-12-01 23:07:20.862 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:07:24 compute-0 podman[257413]: 2025-12-01 23:07:24.827659548 +0000 UTC m=+0.104265808 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller)
Dec  1 23:07:24 compute-0 podman[257414]: 2025-12-01 23:07:24.830883909 +0000 UTC m=+0.099901124 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 23:07:25 compute-0 nova_compute[189508]: 2025-12-01 23:07:25.095 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:07:25 compute-0 nova_compute[189508]: 2025-12-01 23:07:25.864 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:07:29 compute-0 podman[203693]: time="2025-12-01T23:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:07:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:07:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Dec  1 23:07:30 compute-0 nova_compute[189508]: 2025-12-01 23:07:30.099 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:07:30 compute-0 nova_compute[189508]: 2025-12-01 23:07:30.867 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:07:31 compute-0 openstack_network_exporter[205887]: ERROR   23:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:07:31 compute-0 openstack_network_exporter[205887]: ERROR   23:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:07:31 compute-0 openstack_network_exporter[205887]: ERROR   23:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:07:31 compute-0 openstack_network_exporter[205887]: ERROR   23:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:07:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:07:31 compute-0 openstack_network_exporter[205887]: ERROR   23:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:07:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:07:32 compute-0 podman[257456]: 2025-12-01 23:07:32.839940455 +0000 UTC m=+0.102881688 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, 
com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 23:07:32 compute-0 podman[257457]: 2025-12-01 23:07:32.84962769 +0000 UTC m=+0.104025601 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, build-date=2024-09-18T21:23:30)
Dec  1 23:07:32 compute-0 podman[257454]: 2025-12-01 23:07:32.854417346 +0000 UTC m=+0.125468639 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 23:07:32 compute-0 podman[257455]: 2025-12-01 23:07:32.877642944 +0000 UTC m=+0.143379947 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 23:07:35 compute-0 nova_compute[189508]: 2025-12-01 23:07:35.105 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.278 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.278 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.278 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b01160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.288 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '91dfa889-2ab6-4683-bc07-870d2df30bdd', 'name': 'te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh', 'flavor': {'id': '2e42a55e-71e2-4041-8ca2-725d63f058bf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'user_id': '31117d25a4e94964a6d197de21b13cbe', 'hostId': '6371054f80a0ac1fb11dac1293ce9e4cad9937bba665381127450a90', 'status': 'active', 'metadata': {'metering.server_group': '3dac0f46-9f79-460b-b6c5-9876493d569a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.293 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '42680544-e423-4200-816c-a17b766a4339', 'name': 'te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r', 'flavor': {'id': '2e42a55e-71e2-4041-8ca2-725d63f058bf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'user_id': '31117d25a4e94964a6d197de21b13cbe', 'hostId': '6371054f80a0ac1fb11dac1293ce9e4cad9937bba665381127450a90', 'status': 'active', 'metadata': {'metering.server_group': '3dac0f46-9f79-460b-b6c5-9876493d569a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.293 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.294 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.294 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.294 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.296 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T23:07:35.294695) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.303 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.309 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.310 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.310 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T23:07:35.311568) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.311 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.312 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.313 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.314 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.314 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.315 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.315 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.316 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T23:07:35.315241) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.317 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.318 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.318 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.318 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.318 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.320 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T23:07:35.318907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.338 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.339 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.356 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.356 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.357 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.357 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.357 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T23:07:35.357812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.413 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.bytes volume: 30837248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.413 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.465 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.bytes volume: 29572096 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.466 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.466 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.466 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.466 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.466 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.467 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.467 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.467 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.latency volume: 712736138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.467 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.latency volume: 59986442 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.468 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T23:07:35.467091) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.468 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.latency volume: 584056585 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.468 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.latency volume: 66184682 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.469 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.469 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.469 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.469 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.470 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.470 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T23:07:35.469743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.471 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.471 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.471 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.471 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.472 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T23:07:35.471991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.472 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.requests volume: 1113 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.472 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.473 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.requests volume: 1062 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.473 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.473 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.473 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.473 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.474 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.474 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.474 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.474 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.474 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T23:07:35.474239) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.474 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.475 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.475 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.475 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.476 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.476 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.476 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.476 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.476 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.476 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.bytes volume: 73175040 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.476 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.477 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.bytes volume: 72863744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.477 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.477 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.478 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.478 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.478 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.478 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.478 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.478 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.latency volume: 4035457672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.478 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.479 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.latency volume: 6596104133 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.479 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.479 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.480 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.480 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.480 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.480 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.480 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.481 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T23:07:35.476504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.481 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T23:07:35.478516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.481 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T23:07:35.480519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.503 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/cpu volume: 332440000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.528 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/cpu volume: 161380000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.529 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.529 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.529 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.529 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.529 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.530 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.530 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T23:07:35.529713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.531 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.531 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.531 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.531 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.531 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.532 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.532 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.532 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.532 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.533 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.533 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.533 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.requests volume: 351 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.533 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.533 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.requests volume: 320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.534 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.534 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T23:07:35.531697) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T23:07:35.533063) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.535 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.535 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.535 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.535 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.536 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.536 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.536 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.536 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.536 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.536 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.537 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T23:07:35.535660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.538 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T23:07:35.536741) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.538 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.538 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.538 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T23:07:35.538614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.539 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.539 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.539 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.539 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.540 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.540 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.540 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.540 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.540 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T23:07:35.539979) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.541 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.541 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.541 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.541 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.541 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.542 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.542 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.542 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.542 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.542 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T23:07:35.541706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T23:07:35.543062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.543 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.543 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.543 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.544 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.544 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.544 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.544 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.544 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.544 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.545 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.545 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.545 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.545 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.546 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.546 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.546 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/memory.usage volume: 42.34375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.546 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/memory.usage volume: 43.484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.546 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.547 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.547 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.547 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.547 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.547 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.548 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.548 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.550 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T23:07:35.544727) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.550 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T23:07:35.546145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.550 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T23:07:35.547732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:07:35.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:07:35 compute-0 nova_compute[189508]: 2025-12-01 23:07:35.870 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:07:40 compute-0 nova_compute[189508]: 2025-12-01 23:07:40.121 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:07:40 compute-0 nova_compute[189508]: 2025-12-01 23:07:40.873 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:07:45 compute-0 nova_compute[189508]: 2025-12-01 23:07:45.127 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:07:45 compute-0 nova_compute[189508]: 2025-12-01 23:07:45.876 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:07:47 compute-0 podman[257533]: 2025-12-01 23:07:47.791638817 +0000 UTC m=+0.071984522 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 23:07:50 compute-0 nova_compute[189508]: 2025-12-01 23:07:50.134 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:07:50 compute-0 podman[257558]: 2025-12-01 23:07:50.837038003 +0000 UTC m=+0.105038590 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute)
Dec  1 23:07:50 compute-0 podman[257557]: 2025-12-01 23:07:50.842027415 +0000 UTC m=+0.109396484 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd)
Dec  1 23:07:50 compute-0 nova_compute[189508]: 2025-12-01 23:07:50.877 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:07:51 compute-0 nova_compute[189508]: 2025-12-01 23:07:51.661 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:07:55 compute-0 nova_compute[189508]: 2025-12-01 23:07:55.137 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:07:55 compute-0 podman[257595]: 2025-12-01 23:07:55.837096455 +0000 UTC m=+0.094693406 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 23:07:55 compute-0 nova_compute[189508]: 2025-12-01 23:07:55.879 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:07:55 compute-0 podman[257594]: 2025-12-01 23:07:55.887603097 +0000 UTC m=+0.153013180 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  1 23:07:59 compute-0 podman[203693]: time="2025-12-01T23:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:07:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:07:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Dec  1 23:08:00 compute-0 nova_compute[189508]: 2025-12-01 23:08:00.143 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:08:00 compute-0 nova_compute[189508]: 2025-12-01 23:08:00.881 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:08:01 compute-0 nova_compute[189508]: 2025-12-01 23:08:01.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:08:01 compute-0 openstack_network_exporter[205887]: ERROR   23:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:08:01 compute-0 openstack_network_exporter[205887]: ERROR   23:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:08:01 compute-0 openstack_network_exporter[205887]: ERROR   23:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:08:01 compute-0 openstack_network_exporter[205887]: ERROR   23:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:08:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:08:01 compute-0 openstack_network_exporter[205887]: ERROR   23:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:08:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:08:03 compute-0 podman[257636]: 2025-12-01 23:08:03.867157296 +0000 UTC m=+0.122598547 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, release=1755695350, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=edpm, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 23:08:03 compute-0 podman[257634]: 2025-12-01 23:08:03.888627185 +0000 UTC m=+0.146385682 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 23:08:03 compute-0 podman[257637]: 2025-12-01 23:08:03.888732628 +0000 UTC m=+0.121264419 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, container_name=kepler, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., architecture=x86_64, release-0.7.12=, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release=1214.1726694543)
Dec  1 23:08:03 compute-0 podman[257635]: 2025-12-01 23:08:03.910210137 +0000 UTC m=+0.157037014 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 23:08:04 compute-0 nova_compute[189508]: 2025-12-01 23:08:04.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:08:04 compute-0 nova_compute[189508]: 2025-12-01 23:08:04.198 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 23:08:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:08:04.654 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 23:08:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:08:04.655 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 23:08:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:08:04.656 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 23:08:05 compute-0 nova_compute[189508]: 2025-12-01 23:08:05.065 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 23:08:05 compute-0 nova_compute[189508]: 2025-12-01 23:08:05.066 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 23:08:05 compute-0 nova_compute[189508]: 2025-12-01 23:08:05.067 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 23:08:05 compute-0 nova_compute[189508]: 2025-12-01 23:08:05.149 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:08:05 compute-0 nova_compute[189508]: 2025-12-01 23:08:05.884 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:08:06 compute-0 nova_compute[189508]: 2025-12-01 23:08:06.965 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Updating instance_info_cache with network_info: [{"id": "d040598e-3c6d-4c31-a052-e42d95473b17", "address": "fa:16:3e:90:8f:04", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd040598e-3c", "ovs_interfaceid": "d040598e-3c6d-4c31-a052-e42d95473b17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 23:08:06 compute-0 nova_compute[189508]: 2025-12-01 23:08:06.981 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 23:08:06 compute-0 nova_compute[189508]: 2025-12-01 23:08:06.982 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  1 23:08:06 compute-0 nova_compute[189508]: 2025-12-01 23:08:06.983 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:08:06 compute-0 nova_compute[189508]: 2025-12-01 23:08:06.984 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:08:06 compute-0 nova_compute[189508]: 2025-12-01 23:08:06.984 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:08:06 compute-0 nova_compute[189508]: 2025-12-01 23:08:06.985 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:08:06 compute-0 nova_compute[189508]: 2025-12-01 23:08:06.986 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 23:08:07 compute-0 nova_compute[189508]: 2025-12-01 23:08:07.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.154 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.259 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.260 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.260 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.261 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.353 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.436 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.437 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.497 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.503 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.560 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.562 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.620 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.886 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.987 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.989 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4959MB free_disk=72.06571197509766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.989 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:08:10 compute-0 nova_compute[189508]: 2025-12-01 23:08:10.990 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:08:11 compute-0 nova_compute[189508]: 2025-12-01 23:08:11.232 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 91dfa889-2ab6-4683-bc07-870d2df30bdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:08:11 compute-0 nova_compute[189508]: 2025-12-01 23:08:11.232 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 42680544-e423-4200-816c-a17b766a4339 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:08:11 compute-0 nova_compute[189508]: 2025-12-01 23:08:11.233 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:08:11 compute-0 nova_compute[189508]: 2025-12-01 23:08:11.233 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:08:11 compute-0 nova_compute[189508]: 2025-12-01 23:08:11.339 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing inventories for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 23:08:11 compute-0 nova_compute[189508]: 2025-12-01 23:08:11.424 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating ProviderTree inventory for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 23:08:11 compute-0 nova_compute[189508]: 2025-12-01 23:08:11.425 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating inventory in ProviderTree for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 23:08:11 compute-0 nova_compute[189508]: 2025-12-01 23:08:11.438 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing aggregate associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 23:08:11 compute-0 nova_compute[189508]: 2025-12-01 23:08:11.469 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing trait associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 23:08:11 compute-0 nova_compute[189508]: 2025-12-01 23:08:11.542 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:08:11 compute-0 nova_compute[189508]: 2025-12-01 23:08:11.563 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:08:11 compute-0 nova_compute[189508]: 2025-12-01 23:08:11.565 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:08:11 compute-0 nova_compute[189508]: 2025-12-01 23:08:11.566 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:08:15 compute-0 nova_compute[189508]: 2025-12-01 23:08:15.160 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:15 compute-0 nova_compute[189508]: 2025-12-01 23:08:15.890 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:16 compute-0 nova_compute[189508]: 2025-12-01 23:08:16.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:08:18 compute-0 nova_compute[189508]: 2025-12-01 23:08:18.210 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:08:18 compute-0 nova_compute[189508]: 2025-12-01 23:08:18.212 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 23:08:18 compute-0 nova_compute[189508]: 2025-12-01 23:08:18.226 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 23:08:18 compute-0 podman[257724]: 2025-12-01 23:08:18.801656542 +0000 UTC m=+0.070668925 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 23:08:20 compute-0 nova_compute[189508]: 2025-12-01 23:08:20.165 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:20 compute-0 nova_compute[189508]: 2025-12-01 23:08:20.893 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:21 compute-0 podman[257748]: 2025-12-01 23:08:21.821147323 +0000 UTC m=+0.093538404 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:08:21 compute-0 podman[257749]: 2025-12-01 23:08:21.832483864 +0000 UTC m=+0.089431967 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 23:08:25 compute-0 nova_compute[189508]: 2025-12-01 23:08:25.170 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:25 compute-0 nova_compute[189508]: 2025-12-01 23:08:25.895 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:26 compute-0 podman[257788]: 2025-12-01 23:08:26.857121514 +0000 UTC m=+0.120331023 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 23:08:26 compute-0 podman[257787]: 2025-12-01 23:08:26.902060198 +0000 UTC m=+0.168855059 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 23:08:27 compute-0 nova_compute[189508]: 2025-12-01 23:08:27.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:08:27 compute-0 nova_compute[189508]: 2025-12-01 23:08:27.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 23:08:29 compute-0 podman[203693]: time="2025-12-01T23:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:08:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:08:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec  1 23:08:30 compute-0 nova_compute[189508]: 2025-12-01 23:08:30.174 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:30 compute-0 nova_compute[189508]: 2025-12-01 23:08:30.898 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:31 compute-0 openstack_network_exporter[205887]: ERROR   23:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:08:31 compute-0 openstack_network_exporter[205887]: ERROR   23:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:08:31 compute-0 openstack_network_exporter[205887]: ERROR   23:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:08:31 compute-0 openstack_network_exporter[205887]: ERROR   23:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:08:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:08:31 compute-0 openstack_network_exporter[205887]: ERROR   23:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:08:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:08:34 compute-0 podman[257831]: 2025-12-01 23:08:34.845963857 +0000 UTC m=+0.096998412 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Dec  1 23:08:34 compute-0 podman[257830]: 2025-12-01 23:08:34.855479517 +0000 UTC m=+0.129922896 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 23:08:34 compute-0 podman[257835]: 2025-12-01 23:08:34.858962615 +0000 UTC m=+0.107027126 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, com.redhat.component=ubi9-minimal-container)
Dec  1 23:08:34 compute-0 podman[257838]: 2025-12-01 23:08:34.859877391 +0000 UTC m=+0.109048903 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_id=edpm, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.expose-services=, release=1214.1726694543, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git)
Dec  1 23:08:35 compute-0 nova_compute[189508]: 2025-12-01 23:08:35.179 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:35 compute-0 nova_compute[189508]: 2025-12-01 23:08:35.903 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:40 compute-0 nova_compute[189508]: 2025-12-01 23:08:40.183 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:40 compute-0 nova_compute[189508]: 2025-12-01 23:08:40.907 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:45 compute-0 nova_compute[189508]: 2025-12-01 23:08:45.190 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:45 compute-0 nova_compute[189508]: 2025-12-01 23:08:45.913 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:49 compute-0 podman[257912]: 2025-12-01 23:08:49.820282869 +0000 UTC m=+0.087289646 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 23:08:50 compute-0 nova_compute[189508]: 2025-12-01 23:08:50.195 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:50 compute-0 nova_compute[189508]: 2025-12-01 23:08:50.920 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:52 compute-0 nova_compute[189508]: 2025-12-01 23:08:52.212 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:08:52 compute-0 podman[257936]: 2025-12-01 23:08:52.816124691 +0000 UTC m=+0.085157936 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  1 23:08:52 compute-0 podman[257935]: 2025-12-01 23:08:52.816427849 +0000 UTC m=+0.089243571 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Dec  1 23:08:55 compute-0 nova_compute[189508]: 2025-12-01 23:08:55.198 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:55 compute-0 nova_compute[189508]: 2025-12-01 23:08:55.923 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:08:56 compute-0 nova_compute[189508]: 2025-12-01 23:08:56.193 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:08:57 compute-0 podman[257972]: 2025-12-01 23:08:57.788628471 +0000 UTC m=+0.060075255 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Dec  1 23:08:57 compute-0 podman[257971]: 2025-12-01 23:08:57.821137493 +0000 UTC m=+0.096492677 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:08:59 compute-0 podman[203693]: time="2025-12-01T23:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:08:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:08:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
Dec  1 23:09:00 compute-0 nova_compute[189508]: 2025-12-01 23:09:00.205 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:00 compute-0 nova_compute[189508]: 2025-12-01 23:09:00.931 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:01 compute-0 nova_compute[189508]: 2025-12-01 23:09:01.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:09:01 compute-0 openstack_network_exporter[205887]: ERROR   23:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:09:01 compute-0 openstack_network_exporter[205887]: ERROR   23:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:09:01 compute-0 openstack_network_exporter[205887]: ERROR   23:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:09:01 compute-0 openstack_network_exporter[205887]: ERROR   23:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:09:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:09:01 compute-0 openstack_network_exporter[205887]: ERROR   23:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:09:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:09:04 compute-0 nova_compute[189508]: 2025-12-01 23:09:04.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:09:04 compute-0 nova_compute[189508]: 2025-12-01 23:09:04.202 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:09:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:09:04.656 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:09:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:09:04.657 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:09:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:09:04.657 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:09:05 compute-0 nova_compute[189508]: 2025-12-01 23:09:05.210 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:05 compute-0 podman[258014]: 2025-12-01 23:09:05.830012864 +0000 UTC m=+0.101559511 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 23:09:05 compute-0 podman[258015]: 2025-12-01 23:09:05.833827712 +0000 UTC m=+0.101405227 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:09:05 compute-0 podman[258027]: 2025-12-01 23:09:05.853901231 +0000 UTC m=+0.097293110 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, distribution-scope=public, name=ubi9, managed_by=edpm_ansible, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9)
Dec  1 23:09:05 compute-0 podman[258016]: 2025-12-01 23:09:05.859037817 +0000 UTC m=+0.120085766 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, config_id=edpm, build-date=2025-08-20T13:12:41)
Dec  1 23:09:05 compute-0 nova_compute[189508]: 2025-12-01 23:09:05.933 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:06 compute-0 nova_compute[189508]: 2025-12-01 23:09:06.202 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:09:06 compute-0 nova_compute[189508]: 2025-12-01 23:09:06.202 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:09:06 compute-0 nova_compute[189508]: 2025-12-01 23:09:06.203 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:09:06 compute-0 nova_compute[189508]: 2025-12-01 23:09:06.990 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 23:09:06 compute-0 nova_compute[189508]: 2025-12-01 23:09:06.991 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 23:09:06 compute-0 nova_compute[189508]: 2025-12-01 23:09:06.991 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 23:09:06 compute-0 nova_compute[189508]: 2025-12-01 23:09:06.992 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid 91dfa889-2ab6-4683-bc07-870d2df30bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 23:09:09 compute-0 nova_compute[189508]: 2025-12-01 23:09:09.372 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updating instance_info_cache with network_info: [{"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:09:09 compute-0 nova_compute[189508]: 2025-12-01 23:09:09.392 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 23:09:09 compute-0 nova_compute[189508]: 2025-12-01 23:09:09.392 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 23:09:09 compute-0 nova_compute[189508]: 2025-12-01 23:09:09.393 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:09:09 compute-0 nova_compute[189508]: 2025-12-01 23:09:09.393 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:09:09 compute-0 nova_compute[189508]: 2025-12-01 23:09:09.394 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:09:09 compute-0 nova_compute[189508]: 2025-12-01 23:09:09.394 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:09:10 compute-0 nova_compute[189508]: 2025-12-01 23:09:10.216 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:10 compute-0 nova_compute[189508]: 2025-12-01 23:09:10.937 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.235 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.237 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.238 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.239 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.325 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.406 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.409 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.477 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.492 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.568 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.570 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.631 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.991 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.992 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4973MB free_disk=72.06571197509766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.993 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:09:11 compute-0 nova_compute[189508]: 2025-12-01 23:09:11.993 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:09:12 compute-0 nova_compute[189508]: 2025-12-01 23:09:12.087 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 91dfa889-2ab6-4683-bc07-870d2df30bdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:09:12 compute-0 nova_compute[189508]: 2025-12-01 23:09:12.088 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 42680544-e423-4200-816c-a17b766a4339 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:09:12 compute-0 nova_compute[189508]: 2025-12-01 23:09:12.088 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:09:12 compute-0 nova_compute[189508]: 2025-12-01 23:09:12.088 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:09:12 compute-0 nova_compute[189508]: 2025-12-01 23:09:12.162 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:09:12 compute-0 nova_compute[189508]: 2025-12-01 23:09:12.176 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:09:12 compute-0 nova_compute[189508]: 2025-12-01 23:09:12.178 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:09:12 compute-0 nova_compute[189508]: 2025-12-01 23:09:12.178 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:09:15 compute-0 nova_compute[189508]: 2025-12-01 23:09:15.222 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:15 compute-0 nova_compute[189508]: 2025-12-01 23:09:15.940 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:20 compute-0 nova_compute[189508]: 2025-12-01 23:09:20.229 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:20 compute-0 podman[258099]: 2025-12-01 23:09:20.78537755 +0000 UTC m=+0.068106562 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 23:09:20 compute-0 nova_compute[189508]: 2025-12-01 23:09:20.943 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:23 compute-0 podman[258123]: 2025-12-01 23:09:23.803252995 +0000 UTC m=+0.077212211 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 23:09:23 compute-0 podman[258124]: 2025-12-01 23:09:23.831867076 +0000 UTC m=+0.100418908 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  1 23:09:25 compute-0 nova_compute[189508]: 2025-12-01 23:09:25.234 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:25 compute-0 nova_compute[189508]: 2025-12-01 23:09:25.949 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:28 compute-0 podman[258165]: 2025-12-01 23:09:28.848267001 +0000 UTC m=+0.117896734 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 23:09:28 compute-0 podman[258164]: 2025-12-01 23:09:28.873354423 +0000 UTC m=+0.139606100 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Dec  1 23:09:29 compute-0 podman[203693]: time="2025-12-01T23:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:09:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:09:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Dec  1 23:09:30 compute-0 nova_compute[189508]: 2025-12-01 23:09:30.238 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:30 compute-0 nova_compute[189508]: 2025-12-01 23:09:30.951 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:31 compute-0 openstack_network_exporter[205887]: ERROR   23:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:09:31 compute-0 openstack_network_exporter[205887]: ERROR   23:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:09:31 compute-0 openstack_network_exporter[205887]: ERROR   23:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:09:31 compute-0 openstack_network_exporter[205887]: ERROR   23:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:09:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:09:31 compute-0 openstack_network_exporter[205887]: ERROR   23:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:09:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:09:35 compute-0 nova_compute[189508]: 2025-12-01 23:09:35.241 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.278 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.279 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.279 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.289 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '91dfa889-2ab6-4683-bc07-870d2df30bdd', 'name': 'te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh', 'flavor': {'id': '2e42a55e-71e2-4041-8ca2-725d63f058bf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'user_id': '31117d25a4e94964a6d197de21b13cbe', 'hostId': '6371054f80a0ac1fb11dac1293ce9e4cad9937bba665381127450a90', 'status': 'active', 'metadata': {'metering.server_group': '3dac0f46-9f79-460b-b6c5-9876493d569a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.296 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '42680544-e423-4200-816c-a17b766a4339', 'name': 'te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r', 'flavor': {'id': '2e42a55e-71e2-4041-8ca2-725d63f058bf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'user_id': '31117d25a4e94964a6d197de21b13cbe', 'hostId': '6371054f80a0ac1fb11dac1293ce9e4cad9937bba665381127450a90', 'status': 'active', 'metadata': {'metering.server_group': '3dac0f46-9f79-460b-b6c5-9876493d569a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b03350>] with cache [{}], pollster history [{'network.outgoing.packets': [<NovaLikeServer: te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh>, <NovaLikeServer: te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh>, <NovaLikeServer: te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.297 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.298 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.298 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T23:09:35.298507) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.303 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.308 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.309 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.310 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.310 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.311 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T23:09:35.311116) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.311 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.312 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.313 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.313 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.314 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.314 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.314 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T23:09:35.314477) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.315 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.315 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.316 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.316 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.317 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.317 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.317 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.318 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T23:09:35.317657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.338 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.339 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.364 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.364 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.365 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.366 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.367 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T23:09:35.367099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.413 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.bytes volume: 30837248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.414 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.450 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.bytes volume: 29572096 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.451 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.453 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.453 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.454 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.454 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.454 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.455 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T23:09:35.454780) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.455 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.latency volume: 712736138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.456 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.latency volume: 59986442 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.456 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.latency volume: 584056585 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.457 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.latency volume: 66184682 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.458 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.459 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.459 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.459 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.460 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T23:09:35.460072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.460 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.461 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.461 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.462 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.463 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.464 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.464 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.464 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.464 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.465 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.465 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T23:09:35.465112) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.465 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.requests volume: 1113 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.466 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.467 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.requests volume: 1062 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.467 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.468 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.469 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.470 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T23:09:35.470412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.470 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.471 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.471 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.472 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.473 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.474 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.474 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.475 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.475 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.475 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.476 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.476 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T23:09:35.475944) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.476 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.bytes volume: 73175040 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.477 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.478 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.bytes volume: 72863744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.478 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.479 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.479 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.479 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.479 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.479 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.479 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.479 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.latency volume: 4035457672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T23:09:35.479711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.480 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.480 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.latency volume: 6596104133 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.480 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.481 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.481 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.481 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.482 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T23:09:35.482057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.503 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/cpu volume: 334120000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.524 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/cpu volume: 281200000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.525 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.526 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.526 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.526 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.527 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.527 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.528 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T23:09:35.527094) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.530 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.531 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.531 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T23:09:35.531381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.532 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.533 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.534 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.534 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.535 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.535 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.requests volume: 351 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T23:09:35.535050) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.536 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.537 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.requests volume: 320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.538 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.538 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.539 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.539 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.540 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.540 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.540 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.541 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.542 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T23:09:35.540683) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.542 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.543 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.543 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T23:09:35.543716) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.544 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.545 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.546 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.546 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.547 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.547 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.547 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.548 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.549 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T23:09:35.547707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.549 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.550 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.550 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.550 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.551 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.551 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T23:09:35.550867) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.552 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.553 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.553 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.554 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.554 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.554 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.555 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T23:09:35.554547) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.555 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.555 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.556 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.556 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.556 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.556 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.556 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.557 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T23:09:35.556155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.557 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.557 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.558 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.558 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.558 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.558 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T23:09:35.557963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.559 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.559 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.559 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.559 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/memory.usage volume: 42.34375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.560 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/memory.usage volume: 43.484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T23:09:35.559732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.561 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.561 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.561 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.562 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T23:09:35.561761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.562 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.563 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:09:35.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:09:35 compute-0 nova_compute[189508]: 2025-12-01 23:09:35.954 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:36 compute-0 podman[258208]: 2025-12-01 23:09:36.841063616 +0000 UTC m=+0.102179228 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 23:09:36 compute-0 podman[258211]: 2025-12-01 23:09:36.873712682 +0000 UTC m=+0.118876742 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, name=ubi9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 23:09:36 compute-0 podman[258209]: 2025-12-01 23:09:36.873921918 +0000 UTC m=+0.128940097 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 23:09:36 compute-0 podman[258210]: 2025-12-01 23:09:36.875611916 +0000 UTC m=+0.116711891 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, config_id=edpm, io.openshift.expose-services=, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest 
release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 23:09:40 compute-0 nova_compute[189508]: 2025-12-01 23:09:40.247 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:40 compute-0 nova_compute[189508]: 2025-12-01 23:09:40.959 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:45 compute-0 nova_compute[189508]: 2025-12-01 23:09:45.253 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:45 compute-0 nova_compute[189508]: 2025-12-01 23:09:45.962 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:50 compute-0 nova_compute[189508]: 2025-12-01 23:09:50.260 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:50 compute-0 nova_compute[189508]: 2025-12-01 23:09:50.966 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:51 compute-0 podman[258288]: 2025-12-01 23:09:51.834826832 +0000 UTC m=+0.106422849 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:09:54 compute-0 podman[258311]: 2025-12-01 23:09:54.85951258 +0000 UTC m=+0.143174051 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 23:09:54 compute-0 podman[258312]: 2025-12-01 23:09:54.862763612 +0000 UTC m=+0.130373768 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4)
Dec  1 23:09:55 compute-0 nova_compute[189508]: 2025-12-01 23:09:55.176 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:09:55 compute-0 nova_compute[189508]: 2025-12-01 23:09:55.265 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:55 compute-0 nova_compute[189508]: 2025-12-01 23:09:55.970 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:09:59 compute-0 podman[203693]: time="2025-12-01T23:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:09:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:09:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Dec  1 23:09:59 compute-0 podman[258349]: 2025-12-01 23:09:59.860918671 +0000 UTC m=+0.122048752 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  1 23:09:59 compute-0 podman[258348]: 2025-12-01 23:09:59.885868948 +0000 UTC m=+0.166436820 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 23:10:00 compute-0 nova_compute[189508]: 2025-12-01 23:10:00.269 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:00 compute-0 nova_compute[189508]: 2025-12-01 23:10:00.974 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:01 compute-0 nova_compute[189508]: 2025-12-01 23:10:01.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:10:01 compute-0 openstack_network_exporter[205887]: ERROR   23:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:10:01 compute-0 openstack_network_exporter[205887]: ERROR   23:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:10:01 compute-0 openstack_network_exporter[205887]: ERROR   23:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:10:01 compute-0 openstack_network_exporter[205887]: ERROR   23:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:10:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:10:01 compute-0 openstack_network_exporter[205887]: ERROR   23:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:10:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:10:04 compute-0 nova_compute[189508]: 2025-12-01 23:10:04.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:10:04 compute-0 nova_compute[189508]: 2025-12-01 23:10:04.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:10:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:10:04.657 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:10:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:10:04.658 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:10:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:10:04.659 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:10:05 compute-0 nova_compute[189508]: 2025-12-01 23:10:05.273 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:05 compute-0 nova_compute[189508]: 2025-12-01 23:10:05.976 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:06 compute-0 nova_compute[189508]: 2025-12-01 23:10:06.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:10:06 compute-0 nova_compute[189508]: 2025-12-01 23:10:06.202 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:10:07 compute-0 nova_compute[189508]: 2025-12-01 23:10:07.014 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 23:10:07 compute-0 nova_compute[189508]: 2025-12-01 23:10:07.015 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 23:10:07 compute-0 nova_compute[189508]: 2025-12-01 23:10:07.015 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 23:10:07 compute-0 podman[258392]: 2025-12-01 23:10:07.808541425 +0000 UTC m=+0.090187199 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:10:07 compute-0 podman[258397]: 2025-12-01 23:10:07.819964079 +0000 UTC m=+0.090280981 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, distribution-scope=public, name=ubi9, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, build-date=2024-09-18T21:23:30)
Dec  1 23:10:07 compute-0 podman[258393]: 2025-12-01 23:10:07.840881902 +0000 UTC m=+0.121690492 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 23:10:07 compute-0 podman[258394]: 2025-12-01 23:10:07.841661814 +0000 UTC m=+0.113873210 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Dec  1 23:10:09 compute-0 nova_compute[189508]: 2025-12-01 23:10:09.147 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Updating instance_info_cache with network_info: [{"id": "d040598e-3c6d-4c31-a052-e42d95473b17", "address": "fa:16:3e:90:8f:04", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd040598e-3c", "ovs_interfaceid": "d040598e-3c6d-4c31-a052-e42d95473b17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:10:09 compute-0 nova_compute[189508]: 2025-12-01 23:10:09.197 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 23:10:09 compute-0 nova_compute[189508]: 2025-12-01 23:10:09.198 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 23:10:09 compute-0 nova_compute[189508]: 2025-12-01 23:10:09.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:10:09 compute-0 nova_compute[189508]: 2025-12-01 23:10:09.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:10:09 compute-0 nova_compute[189508]: 2025-12-01 23:10:09.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:10:10 compute-0 nova_compute[189508]: 2025-12-01 23:10:10.278 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:10 compute-0 nova_compute[189508]: 2025-12-01 23:10:10.979 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:11 compute-0 nova_compute[189508]: 2025-12-01 23:10:11.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:10:11 compute-0 nova_compute[189508]: 2025-12-01 23:10:11.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:10:11 compute-0 nova_compute[189508]: 2025-12-01 23:10:11.234 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:10:11 compute-0 nova_compute[189508]: 2025-12-01 23:10:11.235 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:10:11 compute-0 nova_compute[189508]: 2025-12-01 23:10:11.236 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:10:11 compute-0 nova_compute[189508]: 2025-12-01 23:10:11.237 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:10:11 compute-0 nova_compute[189508]: 2025-12-01 23:10:11.352 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:10:11 compute-0 nova_compute[189508]: 2025-12-01 23:10:11.443 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:10:11 compute-0 nova_compute[189508]: 2025-12-01 23:10:11.446 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:10:11 compute-0 nova_compute[189508]: 2025-12-01 23:10:11.529 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:10:11 compute-0 nova_compute[189508]: 2025-12-01 23:10:11.542 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:10:11 compute-0 nova_compute[189508]: 2025-12-01 23:10:11.617 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:10:11 compute-0 nova_compute[189508]: 2025-12-01 23:10:11.620 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:10:11 compute-0 nova_compute[189508]: 2025-12-01 23:10:11.698 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:10:12 compute-0 nova_compute[189508]: 2025-12-01 23:10:12.588 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:10:12 compute-0 nova_compute[189508]: 2025-12-01 23:10:12.592 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4980MB free_disk=72.06573104858398GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:10:12 compute-0 nova_compute[189508]: 2025-12-01 23:10:12.593 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:10:12 compute-0 nova_compute[189508]: 2025-12-01 23:10:12.594 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:10:12 compute-0 nova_compute[189508]: 2025-12-01 23:10:12.790 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 91dfa889-2ab6-4683-bc07-870d2df30bdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:10:12 compute-0 nova_compute[189508]: 2025-12-01 23:10:12.791 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 42680544-e423-4200-816c-a17b766a4339 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:10:12 compute-0 nova_compute[189508]: 2025-12-01 23:10:12.792 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:10:12 compute-0 nova_compute[189508]: 2025-12-01 23:10:12.793 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:10:12 compute-0 nova_compute[189508]: 2025-12-01 23:10:12.865 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:10:13 compute-0 nova_compute[189508]: 2025-12-01 23:10:13.564 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:10:13 compute-0 nova_compute[189508]: 2025-12-01 23:10:13.567 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:10:13 compute-0 nova_compute[189508]: 2025-12-01 23:10:13.568 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:10:15 compute-0 nova_compute[189508]: 2025-12-01 23:10:15.284 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:15 compute-0 nova_compute[189508]: 2025-12-01 23:10:15.983 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:20 compute-0 nova_compute[189508]: 2025-12-01 23:10:20.289 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:20 compute-0 nova_compute[189508]: 2025-12-01 23:10:20.987 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:22 compute-0 podman[258482]: 2025-12-01 23:10:22.801883177 +0000 UTC m=+0.075120021 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:10:25 compute-0 nova_compute[189508]: 2025-12-01 23:10:25.295 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:25 compute-0 podman[258506]: 2025-12-01 23:10:25.810900042 +0000 UTC m=+0.092537585 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 23:10:25 compute-0 podman[258507]: 2025-12-01 23:10:25.825135946 +0000 UTC m=+0.089961142 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible)
Dec  1 23:10:25 compute-0 nova_compute[189508]: 2025-12-01 23:10:25.990 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:29 compute-0 podman[203693]: time="2025-12-01T23:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:10:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:10:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Dec  1 23:10:30 compute-0 nova_compute[189508]: 2025-12-01 23:10:30.300 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:30 compute-0 podman[258545]: 2025-12-01 23:10:30.843478954 +0000 UTC m=+0.106080739 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 23:10:30 compute-0 podman[258544]: 2025-12-01 23:10:30.849660009 +0000 UTC m=+0.130926733 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec  1 23:10:30 compute-0 nova_compute[189508]: 2025-12-01 23:10:30.993 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:31 compute-0 openstack_network_exporter[205887]: ERROR   23:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:10:31 compute-0 openstack_network_exporter[205887]: ERROR   23:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:10:31 compute-0 openstack_network_exporter[205887]: ERROR   23:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:10:31 compute-0 openstack_network_exporter[205887]: ERROR   23:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:10:31 compute-0 openstack_network_exporter[205887]: ERROR   23:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:10:35 compute-0 nova_compute[189508]: 2025-12-01 23:10:35.303 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:35 compute-0 nova_compute[189508]: 2025-12-01 23:10:35.996 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:38 compute-0 podman[258590]: 2025-12-01 23:10:38.805062679 +0000 UTC m=+0.079942678 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.component=ubi9-container, release=1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 
'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, config_id=edpm)
Dec  1 23:10:38 compute-0 podman[258589]: 2025-12-01 23:10:38.818090628 +0000 UTC m=+0.085636179 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, 
architecture=x86_64, release=1755695350, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9-minimal)
Dec  1 23:10:38 compute-0 podman[258588]: 2025-12-01 23:10:38.824677895 +0000 UTC m=+0.104257837 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 23:10:38 compute-0 podman[258587]: 2025-12-01 23:10:38.843581971 +0000 UTC m=+0.115200587 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 23:10:40 compute-0 nova_compute[189508]: 2025-12-01 23:10:40.307 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:40 compute-0 nova_compute[189508]: 2025-12-01 23:10:40.997 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:45 compute-0 nova_compute[189508]: 2025-12-01 23:10:45.311 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:46 compute-0 nova_compute[189508]: 2025-12-01 23:10:46.000 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:50 compute-0 nova_compute[189508]: 2025-12-01 23:10:50.315 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:51 compute-0 nova_compute[189508]: 2025-12-01 23:10:51.003 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:53 compute-0 podman[258667]: 2025-12-01 23:10:53.846638923 +0000 UTC m=+0.106199952 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:10:55 compute-0 nova_compute[189508]: 2025-12-01 23:10:55.320 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:56 compute-0 nova_compute[189508]: 2025-12-01 23:10:56.008 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:10:56 compute-0 podman[258693]: 2025-12-01 23:10:56.269370121 +0000 UTC m=+0.121158916 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 23:10:56 compute-0 podman[258694]: 2025-12-01 23:10:56.28591018 +0000 UTC m=+0.120726494 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125)
Dec  1 23:10:58 compute-0 nova_compute[189508]: 2025-12-01 23:10:58.566 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:10:58 compute-0 nova_compute[189508]: 2025-12-01 23:10:58.567 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:10:59 compute-0 podman[203693]: time="2025-12-01T23:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:10:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:10:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Dec  1 23:11:00 compute-0 nova_compute[189508]: 2025-12-01 23:11:00.326 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:01 compute-0 nova_compute[189508]: 2025-12-01 23:11:01.010 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:01 compute-0 openstack_network_exporter[205887]: ERROR   23:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:11:01 compute-0 openstack_network_exporter[205887]: ERROR   23:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:11:01 compute-0 openstack_network_exporter[205887]: ERROR   23:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:11:01 compute-0 openstack_network_exporter[205887]: ERROR   23:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:11:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:11:01 compute-0 openstack_network_exporter[205887]: ERROR   23:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:11:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:11:01 compute-0 podman[258733]: 2025-12-01 23:11:01.843480052 +0000 UTC m=+0.099570655 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  1 23:11:01 compute-0 podman[258732]: 2025-12-01 23:11:01.916537874 +0000 UTC m=+0.168440118 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:11:03 compute-0 nova_compute[189508]: 2025-12-01 23:11:03.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:11:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:11:04.658 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:11:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:11:04.659 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:11:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:11:04.660 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:11:05 compute-0 nova_compute[189508]: 2025-12-01 23:11:05.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:11:05 compute-0 nova_compute[189508]: 2025-12-01 23:11:05.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:11:05 compute-0 nova_compute[189508]: 2025-12-01 23:11:05.331 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:06 compute-0 nova_compute[189508]: 2025-12-01 23:11:06.012 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:07 compute-0 nova_compute[189508]: 2025-12-01 23:11:07.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:11:08 compute-0 nova_compute[189508]: 2025-12-01 23:11:08.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:11:08 compute-0 nova_compute[189508]: 2025-12-01 23:11:08.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:11:08 compute-0 nova_compute[189508]: 2025-12-01 23:11:08.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:11:09 compute-0 nova_compute[189508]: 2025-12-01 23:11:09.118 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 23:11:09 compute-0 nova_compute[189508]: 2025-12-01 23:11:09.119 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 23:11:09 compute-0 nova_compute[189508]: 2025-12-01 23:11:09.119 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 23:11:09 compute-0 nova_compute[189508]: 2025-12-01 23:11:09.120 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid 91dfa889-2ab6-4683-bc07-870d2df30bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 23:11:09 compute-0 podman[258775]: 2025-12-01 23:11:09.798936655 +0000 UTC m=+0.076511750 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 23:11:09 compute-0 podman[258778]: 2025-12-01 23:11:09.810665838 +0000 UTC m=+0.078894938 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.tags=base rhel9, name=ubi9, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 23:11:09 compute-0 podman[258776]: 2025-12-01 23:11:09.82662156 +0000 UTC m=+0.104943887 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Dec  1 23:11:09 compute-0 podman[258777]: 2025-12-01 23:11:09.852822963 +0000 UTC m=+0.116066452 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 23:11:10 compute-0 nova_compute[189508]: 2025-12-01 23:11:10.337 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:10 compute-0 nova_compute[189508]: 2025-12-01 23:11:10.660 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updating instance_info_cache with network_info: [{"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:11:10 compute-0 nova_compute[189508]: 2025-12-01 23:11:10.683 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 23:11:10 compute-0 nova_compute[189508]: 2025-12-01 23:11:10.683 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 23:11:10 compute-0 nova_compute[189508]: 2025-12-01 23:11:10.683 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:11:10 compute-0 nova_compute[189508]: 2025-12-01 23:11:10.684 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:11:11 compute-0 nova_compute[189508]: 2025-12-01 23:11:11.015 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:11 compute-0 nova_compute[189508]: 2025-12-01 23:11:11.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:11:12 compute-0 nova_compute[189508]: 2025-12-01 23:11:12.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:11:12 compute-0 nova_compute[189508]: 2025-12-01 23:11:12.227 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:11:12 compute-0 nova_compute[189508]: 2025-12-01 23:11:12.227 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:11:12 compute-0 nova_compute[189508]: 2025-12-01 23:11:12.228 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:11:12 compute-0 nova_compute[189508]: 2025-12-01 23:11:12.228 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:11:12 compute-0 nova_compute[189508]: 2025-12-01 23:11:12.301 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:11:12 compute-0 nova_compute[189508]: 2025-12-01 23:11:12.358 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:11:12 compute-0 nova_compute[189508]: 2025-12-01 23:11:12.359 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:11:12 compute-0 nova_compute[189508]: 2025-12-01 23:11:12.457 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:11:12 compute-0 nova_compute[189508]: 2025-12-01 23:11:12.469 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:11:12 compute-0 nova_compute[189508]: 2025-12-01 23:11:12.544 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:11:12 compute-0 nova_compute[189508]: 2025-12-01 23:11:12.545 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:11:12 compute-0 nova_compute[189508]: 2025-12-01 23:11:12.624 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:11:13 compute-0 nova_compute[189508]: 2025-12-01 23:11:13.006 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:11:13 compute-0 nova_compute[189508]: 2025-12-01 23:11:13.007 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4968MB free_disk=72.06569290161133GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:11:13 compute-0 nova_compute[189508]: 2025-12-01 23:11:13.007 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:11:13 compute-0 nova_compute[189508]: 2025-12-01 23:11:13.008 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:11:13 compute-0 nova_compute[189508]: 2025-12-01 23:11:13.467 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 91dfa889-2ab6-4683-bc07-870d2df30bdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:11:13 compute-0 nova_compute[189508]: 2025-12-01 23:11:13.468 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 42680544-e423-4200-816c-a17b766a4339 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:11:13 compute-0 nova_compute[189508]: 2025-12-01 23:11:13.468 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:11:13 compute-0 nova_compute[189508]: 2025-12-01 23:11:13.468 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:11:13 compute-0 nova_compute[189508]: 2025-12-01 23:11:13.546 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:11:13 compute-0 nova_compute[189508]: 2025-12-01 23:11:13.559 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:11:13 compute-0 nova_compute[189508]: 2025-12-01 23:11:13.561 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:11:13 compute-0 nova_compute[189508]: 2025-12-01 23:11:13.561 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:11:15 compute-0 nova_compute[189508]: 2025-12-01 23:11:15.341 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:16 compute-0 nova_compute[189508]: 2025-12-01 23:11:16.017 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:20 compute-0 nova_compute[189508]: 2025-12-01 23:11:20.349 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:21 compute-0 nova_compute[189508]: 2025-12-01 23:11:21.019 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:24 compute-0 podman[258871]: 2025-12-01 23:11:24.825749448 +0000 UTC m=+0.090236360 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:11:25 compute-0 nova_compute[189508]: 2025-12-01 23:11:25.355 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:26 compute-0 nova_compute[189508]: 2025-12-01 23:11:26.023 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:26 compute-0 podman[258894]: 2025-12-01 23:11:26.829124765 +0000 UTC m=+0.099590665 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Dec  1 23:11:26 compute-0 podman[258893]: 2025-12-01 23:11:26.840411375 +0000 UTC m=+0.117964875 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:11:29 compute-0 podman[203693]: time="2025-12-01T23:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:11:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:11:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Dec  1 23:11:30 compute-0 nova_compute[189508]: 2025-12-01 23:11:30.361 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:31 compute-0 nova_compute[189508]: 2025-12-01 23:11:31.026 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:31 compute-0 openstack_network_exporter[205887]: ERROR   23:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:11:31 compute-0 openstack_network_exporter[205887]: ERROR   23:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:11:31 compute-0 openstack_network_exporter[205887]: ERROR   23:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:11:31 compute-0 openstack_network_exporter[205887]: ERROR   23:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:11:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:11:31 compute-0 openstack_network_exporter[205887]: ERROR   23:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:11:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:11:32 compute-0 podman[258931]: 2025-12-01 23:11:32.838543169 +0000 UTC m=+0.094112270 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, 
tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec  1 23:11:32 compute-0 podman[258930]: 2025-12-01 23:11:32.891062378 +0000 UTC m=+0.153950247 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.279 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.279 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.291 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '91dfa889-2ab6-4683-bc07-870d2df30bdd', 'name': 'te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh', 'flavor': {'id': '2e42a55e-71e2-4041-8ca2-725d63f058bf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'user_id': '31117d25a4e94964a6d197de21b13cbe', 'hostId': '6371054f80a0ac1fb11dac1293ce9e4cad9937bba665381127450a90', 'status': 'active', 'metadata': {'metering.server_group': '3dac0f46-9f79-460b-b6c5-9876493d569a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.296 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '42680544-e423-4200-816c-a17b766a4339', 'name': 'te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r', 'flavor': {'id': '2e42a55e-71e2-4041-8ca2-725d63f058bf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'user_id': '31117d25a4e94964a6d197de21b13cbe', 'hostId': '6371054f80a0ac1fb11dac1293ce9e4cad9937bba665381127450a90', 'status': 'active', 'metadata': {'metering.server_group': '3dac0f46-9f79-460b-b6c5-9876493d569a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.296 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.296 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.297 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T23:11:35.297002) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.303 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.307 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.308 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.308 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.308 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.309 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.309 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T23:11:35.308991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.309 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.310 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.311 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.311 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.312 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.312 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T23:11:35.311399) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.312 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.313 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.313 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.313 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.313 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.314 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T23:11:35.313276) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.335 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.336 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.362 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.362 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.364 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.365 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T23:11:35.364896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 nova_compute[189508]: 2025-12-01 23:11:35.366 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.435 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.bytes volume: 30837248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.436 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.501 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.bytes volume: 30812672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.501 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.502 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.502 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.502 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.latency volume: 712736138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.503 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.latency volume: 59986442 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.503 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.latency volume: 619764847 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.503 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.latency volume: 76385983 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T23:11:35.502789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.504 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.504 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.504 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.504 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.505 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.505 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.505 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.506 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.506 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.requests volume: 1113 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.506 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.506 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.requests volume: 1112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.506 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.507 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.507 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.507 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T23:11:35.504520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.507 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T23:11:35.506011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.507 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.508 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.508 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T23:11:35.507798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.508 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.509 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.509 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.509 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.509 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.bytes volume: 73175040 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.509 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.510 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.bytes volume: 73170944 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T23:11:35.509327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.510 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.510 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.510 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.511 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.511 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.511 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.latency volume: 4035457672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.511 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.511 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.latency volume: 6648799272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T23:11:35.511095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.512 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.512 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.512 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.512 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.512 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.513 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.513 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T23:11:35.513051) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.545 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/cpu volume: 335840000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.583 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/cpu volume: 334040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.584 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.584 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.584 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.584 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.585 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.585 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.586 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.586 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.586 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T23:11:35.584749) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.586 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.587 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.587 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T23:11:35.586052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.587 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.requests volume: 351 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.587 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T23:11:35.587384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.588 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.requests volume: 345 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.588 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.588 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.589 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.589 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.589 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.590 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.590 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.590 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.590 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.591 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.591 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T23:11:35.589354) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.591 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T23:11:35.590239) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T23:11:35.591572) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.592 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.592 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.592 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.592 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.592 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.592 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.592 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.593 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.593 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.593 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.593 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.593 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T23:11:35.592522) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.594 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T23:11:35.593654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.594 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.594 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.595 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.595 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.595 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.595 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T23:11:35.594962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.595 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.595 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.595 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.596 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.596 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.596 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.596 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.597 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.597 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.597 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.597 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/memory.usage volume: 42.3671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.597 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/memory.usage volume: 42.390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.598 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.598 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.598 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.598 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.598 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.598 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.598 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.598 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.599 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.599 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.599 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T23:11:35.596099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.599 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T23:11:35.597538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T23:11:35.598780) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:11:35.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:11:36 compute-0 nova_compute[189508]: 2025-12-01 23:11:36.029 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:40 compute-0 nova_compute[189508]: 2025-12-01 23:11:40.370 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:40 compute-0 podman[258974]: 2025-12-01 23:11:40.820182387 +0000 UTC m=+0.094137320 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 23:11:40 compute-0 podman[258975]: 2025-12-01 23:11:40.847170673 +0000 UTC m=+0.111265487 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 23:11:40 compute-0 podman[258977]: 2025-12-01 23:11:40.865455741 +0000 UTC m=+0.117033150 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=kepler, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, architecture=x86_64, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, distribution-scope=public, name=ubi9, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 23:11:40 compute-0 podman[258976]: 2025-12-01 23:11:40.875685891 +0000 UTC m=+0.136695627 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, release=1755695350, vcs-type=git, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm)
Dec  1 23:11:41 compute-0 nova_compute[189508]: 2025-12-01 23:11:41.032 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:45 compute-0 nova_compute[189508]: 2025-12-01 23:11:45.374 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:46 compute-0 nova_compute[189508]: 2025-12-01 23:11:46.035 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:50 compute-0 nova_compute[189508]: 2025-12-01 23:11:50.378 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:51 compute-0 nova_compute[189508]: 2025-12-01 23:11:51.037 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:55 compute-0 nova_compute[189508]: 2025-12-01 23:11:55.382 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:55 compute-0 podman[259056]: 2025-12-01 23:11:55.830365039 +0000 UTC m=+0.107465468 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 23:11:56 compute-0 nova_compute[189508]: 2025-12-01 23:11:56.040 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:11:57 compute-0 nova_compute[189508]: 2025-12-01 23:11:57.557 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:11:57 compute-0 podman[259080]: 2025-12-01 23:11:57.824888867 +0000 UTC m=+0.106629195 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 23:11:57 compute-0 podman[259081]: 2025-12-01 23:11:57.827331296 +0000 UTC m=+0.106245614 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:11:59 compute-0 podman[203693]: time="2025-12-01T23:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:11:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:11:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Dec  1 23:12:00 compute-0 nova_compute[189508]: 2025-12-01 23:12:00.387 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:01 compute-0 nova_compute[189508]: 2025-12-01 23:12:01.042 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:01 compute-0 openstack_network_exporter[205887]: ERROR   23:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:12:01 compute-0 openstack_network_exporter[205887]: ERROR   23:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:12:01 compute-0 openstack_network_exporter[205887]: ERROR   23:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:12:01 compute-0 openstack_network_exporter[205887]: ERROR   23:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:12:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:12:01 compute-0 openstack_network_exporter[205887]: ERROR   23:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:12:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:12:03 compute-0 podman[259118]: 2025-12-01 23:12:03.849776098 +0000 UTC m=+0.117702148 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 23:12:03 compute-0 podman[259117]: 2025-12-01 23:12:03.87065855 +0000 UTC m=+0.146879286 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 23:12:04 compute-0 nova_compute[189508]: 2025-12-01 23:12:04.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:12:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:12:04.660 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:12:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:12:04.660 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:12:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:12:04.661 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:12:05 compute-0 nova_compute[189508]: 2025-12-01 23:12:05.391 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:06 compute-0 nova_compute[189508]: 2025-12-01 23:12:06.045 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:06 compute-0 nova_compute[189508]: 2025-12-01 23:12:06.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:12:06 compute-0 nova_compute[189508]: 2025-12-01 23:12:06.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:12:07 compute-0 nova_compute[189508]: 2025-12-01 23:12:07.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:12:09 compute-0 nova_compute[189508]: 2025-12-01 23:12:09.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:12:10 compute-0 nova_compute[189508]: 2025-12-01 23:12:10.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:12:10 compute-0 nova_compute[189508]: 2025-12-01 23:12:10.202 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:12:10 compute-0 nova_compute[189508]: 2025-12-01 23:12:10.398 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:11 compute-0 nova_compute[189508]: 2025-12-01 23:12:11.035 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 23:12:11 compute-0 nova_compute[189508]: 2025-12-01 23:12:11.036 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 23:12:11 compute-0 nova_compute[189508]: 2025-12-01 23:12:11.036 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 23:12:11 compute-0 nova_compute[189508]: 2025-12-01 23:12:11.048 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:11 compute-0 podman[259159]: 2025-12-01 23:12:11.796892538 +0000 UTC m=+0.075270705 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:12:11 compute-0 podman[259161]: 2025-12-01 23:12:11.797902837 +0000 UTC m=+0.076736977 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, version=9.6, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 23:12:11 compute-0 podman[259160]: 2025-12-01 23:12:11.814021484 +0000 UTC m=+0.082301545 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 23:12:11 compute-0 podman[259162]: 2025-12-01 23:12:11.834831614 +0000 UTC m=+0.094628574 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, release=1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.buildah.version=1.29.0)
Dec  1 23:12:12 compute-0 nova_compute[189508]: 2025-12-01 23:12:12.654 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Updating instance_info_cache with network_info: [{"id": "d040598e-3c6d-4c31-a052-e42d95473b17", "address": "fa:16:3e:90:8f:04", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd040598e-3c", "ovs_interfaceid": "d040598e-3c6d-4c31-a052-e42d95473b17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:12:12 compute-0 nova_compute[189508]: 2025-12-01 23:12:12.720 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 23:12:12 compute-0 nova_compute[189508]: 2025-12-01 23:12:12.720 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 23:12:12 compute-0 nova_compute[189508]: 2025-12-01 23:12:12.721 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:12:12 compute-0 nova_compute[189508]: 2025-12-01 23:12:12.721 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:12:12 compute-0 nova_compute[189508]: 2025-12-01 23:12:12.722 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:12:12 compute-0 nova_compute[189508]: 2025-12-01 23:12:12.808 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:12:12 compute-0 nova_compute[189508]: 2025-12-01 23:12:12.809 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:12:12 compute-0 nova_compute[189508]: 2025-12-01 23:12:12.809 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:12:12 compute-0 nova_compute[189508]: 2025-12-01 23:12:12.809 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:12:13 compute-0 nova_compute[189508]: 2025-12-01 23:12:13.153 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:12:13 compute-0 nova_compute[189508]: 2025-12-01 23:12:13.241 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:12:13 compute-0 nova_compute[189508]: 2025-12-01 23:12:13.242 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:12:13 compute-0 nova_compute[189508]: 2025-12-01 23:12:13.304 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:12:13 compute-0 nova_compute[189508]: 2025-12-01 23:12:13.310 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:12:13 compute-0 nova_compute[189508]: 2025-12-01 23:12:13.363 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:12:13 compute-0 nova_compute[189508]: 2025-12-01 23:12:13.364 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:12:13 compute-0 nova_compute[189508]: 2025-12-01 23:12:13.418 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:12:14 compute-0 nova_compute[189508]: 2025-12-01 23:12:14.255 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:12:14 compute-0 nova_compute[189508]: 2025-12-01 23:12:14.257 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4976MB free_disk=72.06573867797852GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:12:14 compute-0 nova_compute[189508]: 2025-12-01 23:12:14.258 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:12:14 compute-0 nova_compute[189508]: 2025-12-01 23:12:14.258 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:12:15 compute-0 nova_compute[189508]: 2025-12-01 23:12:15.359 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 91dfa889-2ab6-4683-bc07-870d2df30bdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:12:15 compute-0 nova_compute[189508]: 2025-12-01 23:12:15.360 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 42680544-e423-4200-816c-a17b766a4339 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:12:15 compute-0 nova_compute[189508]: 2025-12-01 23:12:15.360 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:12:15 compute-0 nova_compute[189508]: 2025-12-01 23:12:15.360 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:12:15 compute-0 nova_compute[189508]: 2025-12-01 23:12:15.403 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:15 compute-0 nova_compute[189508]: 2025-12-01 23:12:15.422 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:12:15 compute-0 nova_compute[189508]: 2025-12-01 23:12:15.438 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:12:15 compute-0 nova_compute[189508]: 2025-12-01 23:12:15.440 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:12:15 compute-0 nova_compute[189508]: 2025-12-01 23:12:15.441 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:12:16 compute-0 nova_compute[189508]: 2025-12-01 23:12:16.050 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:20 compute-0 nova_compute[189508]: 2025-12-01 23:12:20.407 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:21 compute-0 nova_compute[189508]: 2025-12-01 23:12:21.052 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:25 compute-0 nova_compute[189508]: 2025-12-01 23:12:25.411 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:26 compute-0 nova_compute[189508]: 2025-12-01 23:12:26.054 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:26 compute-0 podman[259254]: 2025-12-01 23:12:26.82711862 +0000 UTC m=+0.101337494 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:12:28 compute-0 podman[259276]: 2025-12-01 23:12:28.879837295 +0000 UTC m=+0.143866230 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec  1 23:12:28 compute-0 podman[259277]: 2025-12-01 23:12:28.885696741 +0000 UTC m=+0.155765767 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 23:12:29 compute-0 podman[203693]: time="2025-12-01T23:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:12:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:12:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
Dec  1 23:12:30 compute-0 nova_compute[189508]: 2025-12-01 23:12:30.414 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:31 compute-0 nova_compute[189508]: 2025-12-01 23:12:31.057 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:31 compute-0 openstack_network_exporter[205887]: ERROR   23:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:12:31 compute-0 openstack_network_exporter[205887]: ERROR   23:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:12:31 compute-0 openstack_network_exporter[205887]: ERROR   23:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:12:31 compute-0 openstack_network_exporter[205887]: ERROR   23:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:12:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:12:31 compute-0 openstack_network_exporter[205887]: ERROR   23:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:12:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:12:34 compute-0 podman[259315]: 2025-12-01 23:12:34.821780594 +0000 UTC m=+0.088183951 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec  1 23:12:34 compute-0 podman[259314]: 2025-12-01 23:12:34.910209711 +0000 UTC m=+0.173159591 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true)
Dec  1 23:12:35 compute-0 nova_compute[189508]: 2025-12-01 23:12:35.418 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:36 compute-0 nova_compute[189508]: 2025-12-01 23:12:36.059 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:40 compute-0 nova_compute[189508]: 2025-12-01 23:12:40.423 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:41 compute-0 nova_compute[189508]: 2025-12-01 23:12:41.061 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:42 compute-0 podman[259356]: 2025-12-01 23:12:42.83296471 +0000 UTC m=+0.104306039 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 23:12:42 compute-0 podman[259359]: 2025-12-01 23:12:42.840063721 +0000 UTC m=+0.101478579 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, version=9.4, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Dec  1 23:12:42 compute-0 podman[259358]: 2025-12-01 23:12:42.842733457 +0000 UTC m=+0.102821367 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, name=ubi9-minimal, io.openshift.expose-services=, container_name=openstack_network_exporter)
Dec  1 23:12:42 compute-0 podman[259357]: 2025-12-01 23:12:42.852456872 +0000 UTC m=+0.124783289 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  1 23:12:45 compute-0 nova_compute[189508]: 2025-12-01 23:12:45.428 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:46 compute-0 nova_compute[189508]: 2025-12-01 23:12:46.064 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:50 compute-0 nova_compute[189508]: 2025-12-01 23:12:50.433 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:51 compute-0 nova_compute[189508]: 2025-12-01 23:12:51.066 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:55 compute-0 nova_compute[189508]: 2025-12-01 23:12:55.439 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:56 compute-0 nova_compute[189508]: 2025-12-01 23:12:56.069 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:12:57 compute-0 podman[259437]: 2025-12-01 23:12:57.833345647 +0000 UTC m=+0.112191222 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 23:12:59 compute-0 podman[203693]: time="2025-12-01T23:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:12:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:12:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Dec  1 23:12:59 compute-0 podman[259460]: 2025-12-01 23:12:59.839275256 +0000 UTC m=+0.111234264 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec  1 23:12:59 compute-0 podman[259461]: 2025-12-01 23:12:59.860111167 +0000 UTC m=+0.119096207 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 23:13:00 compute-0 nova_compute[189508]: 2025-12-01 23:13:00.447 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:01 compute-0 nova_compute[189508]: 2025-12-01 23:13:01.071 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:01 compute-0 openstack_network_exporter[205887]: ERROR   23:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:13:01 compute-0 openstack_network_exporter[205887]: ERROR   23:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:13:01 compute-0 openstack_network_exporter[205887]: ERROR   23:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:13:01 compute-0 openstack_network_exporter[205887]: ERROR   23:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:13:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:13:01 compute-0 openstack_network_exporter[205887]: ERROR   23:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:13:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:13:02 compute-0 nova_compute[189508]: 2025-12-01 23:13:02.435 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:13:02 compute-0 nova_compute[189508]: 2025-12-01 23:13:02.436 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:13:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:13:04.662 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:13:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:13:04.663 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:13:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:13:04.664 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:13:05 compute-0 nova_compute[189508]: 2025-12-01 23:13:05.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:13:05 compute-0 nova_compute[189508]: 2025-12-01 23:13:05.453 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:05 compute-0 podman[259496]: 2025-12-01 23:13:05.871244119 +0000 UTC m=+0.125190171 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  1 23:13:05 compute-0 podman[259495]: 2025-12-01 23:13:05.893964543 +0000 UTC m=+0.170450504 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:13:06 compute-0 nova_compute[189508]: 2025-12-01 23:13:06.075 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:07 compute-0 nova_compute[189508]: 2025-12-01 23:13:07.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:13:07 compute-0 nova_compute[189508]: 2025-12-01 23:13:07.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:13:08 compute-0 nova_compute[189508]: 2025-12-01 23:13:08.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:13:10 compute-0 nova_compute[189508]: 2025-12-01 23:13:10.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:13:10 compute-0 nova_compute[189508]: 2025-12-01 23:13:10.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:13:10 compute-0 nova_compute[189508]: 2025-12-01 23:13:10.459 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:11 compute-0 nova_compute[189508]: 2025-12-01 23:13:11.079 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:11 compute-0 nova_compute[189508]: 2025-12-01 23:13:11.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:13:11 compute-0 nova_compute[189508]: 2025-12-01 23:13:11.202 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:13:11 compute-0 nova_compute[189508]: 2025-12-01 23:13:11.203 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:13:12 compute-0 nova_compute[189508]: 2025-12-01 23:13:12.193 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 23:13:12 compute-0 nova_compute[189508]: 2025-12-01 23:13:12.194 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 23:13:12 compute-0 nova_compute[189508]: 2025-12-01 23:13:12.194 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 23:13:12 compute-0 nova_compute[189508]: 2025-12-01 23:13:12.194 189512 DEBUG nova.objects.instance [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lazy-loading 'info_cache' on Instance uuid 91dfa889-2ab6-4683-bc07-870d2df30bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 23:13:13 compute-0 podman[259544]: 2025-12-01 23:13:13.855678306 +0000 UTC m=+0.114260271 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  1 23:13:13 compute-0 podman[259546]: 2025-12-01 23:13:13.873653865 +0000 UTC m=+0.120812287 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., version=9.4, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Dec  1 23:13:13 compute-0 podman[259543]: 2025-12-01 23:13:13.880717056 +0000 UTC m=+0.140869346 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 23:13:13 compute-0 podman[259545]: 2025-12-01 23:13:13.896497953 +0000 UTC m=+0.142707338 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.201 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updating instance_info_cache with network_info: [{"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.232 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-91dfa889-2ab6-4683-bc07-870d2df30bdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.233 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.234 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.234 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.266 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.266 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.267 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.268 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.351 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.412 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.414 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.465 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.474 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.482 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.550 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.551 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:13:15 compute-0 nova_compute[189508]: 2025-12-01 23:13:15.611 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.039 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.040 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4974MB free_disk=72.06573867797852GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.041 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.041 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.081 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.330 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 91dfa889-2ab6-4683-bc07-870d2df30bdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.331 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 42680544-e423-4200-816c-a17b766a4339 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.332 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.332 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.441 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing inventories for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.537 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating ProviderTree inventory for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.537 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating inventory in ProviderTree for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.557 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing aggregate associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.600 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing trait associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.677 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.704 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.707 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:13:16 compute-0 nova_compute[189508]: 2025-12-01 23:13:16.707 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:13:20 compute-0 nova_compute[189508]: 2025-12-01 23:13:20.471 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:21 compute-0 nova_compute[189508]: 2025-12-01 23:13:21.082 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:23 compute-0 nova_compute[189508]: 2025-12-01 23:13:23.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:13:23 compute-0 nova_compute[189508]: 2025-12-01 23:13:23.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 23:13:23 compute-0 nova_compute[189508]: 2025-12-01 23:13:23.228 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 23:13:25 compute-0 nova_compute[189508]: 2025-12-01 23:13:25.476 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:26 compute-0 nova_compute[189508]: 2025-12-01 23:13:26.048 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:13:26 compute-0 nova_compute[189508]: 2025-12-01 23:13:26.085 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:26 compute-0 nova_compute[189508]: 2025-12-01 23:13:26.105 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Triggering sync for uuid 91dfa889-2ab6-4683-bc07-870d2df30bdd _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 23:13:26 compute-0 nova_compute[189508]: 2025-12-01 23:13:26.106 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Triggering sync for uuid 42680544-e423-4200-816c-a17b766a4339 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 23:13:26 compute-0 nova_compute[189508]: 2025-12-01 23:13:26.106 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "91dfa889-2ab6-4683-bc07-870d2df30bdd" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:13:26 compute-0 nova_compute[189508]: 2025-12-01 23:13:26.107 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:13:26 compute-0 nova_compute[189508]: 2025-12-01 23:13:26.108 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "42680544-e423-4200-816c-a17b766a4339" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:13:26 compute-0 nova_compute[189508]: 2025-12-01 23:13:26.108 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "42680544-e423-4200-816c-a17b766a4339" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:13:26 compute-0 nova_compute[189508]: 2025-12-01 23:13:26.160 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:13:26 compute-0 nova_compute[189508]: 2025-12-01 23:13:26.163 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "42680544-e423-4200-816c-a17b766a4339" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:13:28 compute-0 podman[259634]: 2025-12-01 23:13:28.811933885 +0000 UTC m=+0.094842870 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 23:13:29 compute-0 nova_compute[189508]: 2025-12-01 23:13:29.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:13:29 compute-0 nova_compute[189508]: 2025-12-01 23:13:29.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 23:13:29 compute-0 podman[203693]: time="2025-12-01T23:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:13:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:13:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Dec  1 23:13:30 compute-0 nova_compute[189508]: 2025-12-01 23:13:30.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:13:30 compute-0 nova_compute[189508]: 2025-12-01 23:13:30.481 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:30 compute-0 podman[259656]: 2025-12-01 23:13:30.877834776 +0000 UTC m=+0.134586817 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd)
Dec  1 23:13:30 compute-0 podman[259657]: 2025-12-01 23:13:30.90337114 +0000 UTC m=+0.154735679 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Dec  1 23:13:31 compute-0 nova_compute[189508]: 2025-12-01 23:13:31.087 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:31 compute-0 openstack_network_exporter[205887]: ERROR   23:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:13:31 compute-0 openstack_network_exporter[205887]: ERROR   23:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:13:31 compute-0 openstack_network_exporter[205887]: ERROR   23:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:13:31 compute-0 openstack_network_exporter[205887]: ERROR   23:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:13:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:13:31 compute-0 openstack_network_exporter[205887]: ERROR   23:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:13:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.280 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.280 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.280 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.281 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.289 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '91dfa889-2ab6-4683-bc07-870d2df30bdd', 'name': 'te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh', 'flavor': {'id': '2e42a55e-71e2-4041-8ca2-725d63f058bf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'user_id': '31117d25a4e94964a6d197de21b13cbe', 'hostId': '6371054f80a0ac1fb11dac1293ce9e4cad9937bba665381127450a90', 'status': 'active', 'metadata': {'metering.server_group': '3dac0f46-9f79-460b-b6c5-9876493d569a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.294 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '42680544-e423-4200-816c-a17b766a4339', 'name': 'te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r', 'flavor': {'id': '2e42a55e-71e2-4041-8ca2-725d63f058bf', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'user_id': '31117d25a4e94964a6d197de21b13cbe', 'hostId': '6371054f80a0ac1fb11dac1293ce9e4cad9937bba665381127450a90', 'status': 'active', 'metadata': {'metering.server_group': '3dac0f46-9f79-460b-b6c5-9876493d569a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.295 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.295 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.296 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T23:13:35.296204) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.302 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.310 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.312 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.313 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.313 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.313 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.314 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.314 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.315 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.316 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T23:13:35.314070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.317 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.317 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.317 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.318 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.318 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.319 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.320 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.321 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.321 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.321 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.321 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.323 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T23:13:35.318108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.323 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T23:13:35.321740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.343 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.343 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.358 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.359 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.359 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.359 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.359 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.360 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.360 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T23:13:35.360212) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.394 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.bytes volume: 30837248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.395 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.428 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.bytes volume: 30812672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.428 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.429 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.429 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.429 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.429 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.429 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.430 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.430 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.latency volume: 712736138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.430 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.latency volume: 59986442 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.431 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.latency volume: 619764847 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T23:13:35.430067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.431 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.latency volume: 76385983 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.432 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.432 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.433 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.433 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T23:13:35.433212) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.433 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.434 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.434 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.435 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.436 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.436 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.436 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.436 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.436 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.437 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.437 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.requests volume: 1113 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.437 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T23:13:35.437113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.438 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.438 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.requests volume: 1112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.439 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.440 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.440 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.440 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.440 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.441 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.441 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.441 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.442 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.442 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T23:13:35.441252) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.443 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.444 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.444 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.445 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.445 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.445 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.445 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.446 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.bytes volume: 73175040 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.446 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T23:13:35.445603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.446 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.447 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.bytes volume: 73170944 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.447 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.448 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.448 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.448 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.449 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.449 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.449 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.449 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.latency volume: 4035457672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.450 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T23:13:35.449432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.450 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.451 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.latency volume: 6648799272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.451 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.452 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.452 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.452 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.453 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.453 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.454 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T23:13:35.453240) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.474 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/cpu volume: 337490000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 nova_compute[189508]: 2025-12-01 23:13:35.484 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.492 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/cpu volume: 335830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.493 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.493 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.493 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.493 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.493 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.493 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.494 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.494 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.494 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.495 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.495 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.495 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.495 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.495 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.496 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.496 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T23:13:35.493917) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T23:13:35.495835) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.497 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.497 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.497 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.497 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.requests volume: 351 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T23:13:35.497736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.498 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.498 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.requests volume: 345 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.498 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.499 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.499 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.499 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.500 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.500 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.500 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T23:13:35.500662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.501 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.502 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.502 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T23:13:35.502259) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.502 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.503 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.503 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.503 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.503 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.505 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.505 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.505 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.506 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.507 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.507 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.508 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T23:13:35.503946) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.508 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T23:13:35.505731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.508 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T23:13:35.508142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.509 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.509 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.509 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.509 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.510 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.510 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.510 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.511 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.511 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.511 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T23:13:35.509910) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.512 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.512 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T23:13:35.512023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.512 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.513 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.513 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.513 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.514 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.514 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.514 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/memory.usage volume: 42.3671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.515 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/memory.usage volume: 42.390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.515 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T23:13:35.514478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.515 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.516 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.516 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.516 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.517 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.517 14 DEBUG ceilometer.compute.pollsters [-] 91dfa889-2ab6-4683-bc07-870d2df30bdd/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.517 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T23:13:35.517055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.517 14 DEBUG ceilometer.compute.pollsters [-] 42680544-e423-4200-816c-a17b766a4339/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.518 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:13:35.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:13:36 compute-0 nova_compute[189508]: 2025-12-01 23:13:36.091 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:36 compute-0 podman[259695]: 2025-12-01 23:13:36.866521814 +0000 UTC m=+0.134849161 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 23:13:36 compute-0 podman[259696]: 2025-12-01 23:13:36.868971602 +0000 UTC m=+0.119468100 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  1 23:13:40 compute-0 nova_compute[189508]: 2025-12-01 23:13:40.486 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:41 compute-0 nova_compute[189508]: 2025-12-01 23:13:41.094 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:44 compute-0 podman[259739]: 2025-12-01 23:13:44.778556427 +0000 UTC m=+0.087035674 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 23:13:44 compute-0 podman[259740]: 2025-12-01 23:13:44.799324777 +0000 UTC m=+0.093342130 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  1 23:13:44 compute-0 podman[259747]: 2025-12-01 23:13:44.80121005 +0000 UTC m=+0.089002719 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, config_id=edpm, release=1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9)
Dec  1 23:13:44 compute-0 podman[259745]: 2025-12-01 23:13:44.827344631 +0000 UTC m=+0.109397699 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 23:13:45 compute-0 nova_compute[189508]: 2025-12-01 23:13:45.492 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:46 compute-0 nova_compute[189508]: 2025-12-01 23:13:46.099 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:50 compute-0 nova_compute[189508]: 2025-12-01 23:13:50.498 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:51 compute-0 nova_compute[189508]: 2025-12-01 23:13:51.103 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:55 compute-0 nova_compute[189508]: 2025-12-01 23:13:55.503 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:56 compute-0 nova_compute[189508]: 2025-12-01 23:13:56.106 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:13:59 compute-0 nova_compute[189508]: 2025-12-01 23:13:59.206 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:13:59 compute-0 podman[203693]: time="2025-12-01T23:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:13:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:13:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Dec  1 23:13:59 compute-0 podman[259816]: 2025-12-01 23:13:59.84607431 +0000 UTC m=+0.121696343 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 23:14:00 compute-0 nova_compute[189508]: 2025-12-01 23:14:00.507 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:01 compute-0 nova_compute[189508]: 2025-12-01 23:14:01.108 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:01 compute-0 openstack_network_exporter[205887]: ERROR   23:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:14:01 compute-0 openstack_network_exporter[205887]: ERROR   23:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:14:01 compute-0 openstack_network_exporter[205887]: ERROR   23:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:14:01 compute-0 openstack_network_exporter[205887]: ERROR   23:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:14:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:14:01 compute-0 openstack_network_exporter[205887]: ERROR   23:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:14:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:14:01 compute-0 podman[259839]: 2025-12-01 23:14:01.791266674 +0000 UTC m=+0.072880939 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 23:14:01 compute-0 podman[259840]: 2025-12-01 23:14:01.804984547 +0000 UTC m=+0.079248006 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, 
org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2)
Dec  1 23:14:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:04.663 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:14:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:04.665 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:14:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:04.666 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:14:05 compute-0 nova_compute[189508]: 2025-12-01 23:14:05.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:14:05 compute-0 nova_compute[189508]: 2025-12-01 23:14:05.510 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:06 compute-0 nova_compute[189508]: 2025-12-01 23:14:06.110 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:07 compute-0 podman[259879]: 2025-12-01 23:14:07.815695951 +0000 UTC m=+0.082183237 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec  1 23:14:07 compute-0 podman[259878]: 2025-12-01 23:14:07.843891249 +0000 UTC m=+0.120621282 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 23:14:08 compute-0 nova_compute[189508]: 2025-12-01 23:14:08.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:14:08 compute-0 nova_compute[189508]: 2025-12-01 23:14:08.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:14:09 compute-0 nova_compute[189508]: 2025-12-01 23:14:09.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:14:10 compute-0 nova_compute[189508]: 2025-12-01 23:14:10.201 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:14:10 compute-0 nova_compute[189508]: 2025-12-01 23:14:10.515 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:11 compute-0 nova_compute[189508]: 2025-12-01 23:14:11.116 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:11 compute-0 nova_compute[189508]: 2025-12-01 23:14:11.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:14:13 compute-0 nova_compute[189508]: 2025-12-01 23:14:13.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:14:13 compute-0 nova_compute[189508]: 2025-12-01 23:14:13.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:14:14 compute-0 nova_compute[189508]: 2025-12-01 23:14:14.247 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 23:14:14 compute-0 nova_compute[189508]: 2025-12-01 23:14:14.247 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquired lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 23:14:14 compute-0 nova_compute[189508]: 2025-12-01 23:14:14.248 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 23:14:15 compute-0 nova_compute[189508]: 2025-12-01 23:14:15.522 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:15 compute-0 podman[259918]: 2025-12-01 23:14:15.812441727 +0000 UTC m=+0.087998201 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:14:15 compute-0 podman[259919]: 2025-12-01 23:14:15.834629298 +0000 UTC m=+0.088255919 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:14:15 compute-0 podman[259920]: 2025-12-01 23:14:15.853239298 +0000 UTC m=+0.105773538 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, version=9.6, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container)
Dec  1 23:14:15 compute-0 podman[259925]: 2025-12-01 23:14:15.865343076 +0000 UTC m=+0.121567969 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.29.0, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_id=edpm, release-0.7.12=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.118 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.551 189512 DEBUG nova.network.neutron [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Updating instance_info_cache with network_info: [{"id": "d040598e-3c6d-4c31-a052-e42d95473b17", "address": "fa:16:3e:90:8f:04", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd040598e-3c", "ovs_interfaceid": "d040598e-3c6d-4c31-a052-e42d95473b17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.566 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Releasing lock "refresh_cache-42680544-e423-4200-816c-a17b766a4339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.567 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.568 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.569 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.595 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.597 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.598 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.599 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.706 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.779 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.780 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.859 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.867 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.934 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:14:16 compute-0 nova_compute[189508]: 2025-12-01 23:14:16.936 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 23:14:17 compute-0 nova_compute[189508]: 2025-12-01 23:14:17.001 189512 DEBUG oslo_concurrency.processutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 23:14:17 compute-0 nova_compute[189508]: 2025-12-01 23:14:17.393 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:14:17 compute-0 nova_compute[189508]: 2025-12-01 23:14:17.396 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4971MB free_disk=72.06573867797852GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:14:17 compute-0 nova_compute[189508]: 2025-12-01 23:14:17.397 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:14:17 compute-0 nova_compute[189508]: 2025-12-01 23:14:17.398 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:14:17 compute-0 nova_compute[189508]: 2025-12-01 23:14:17.545 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 91dfa889-2ab6-4683-bc07-870d2df30bdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:14:17 compute-0 nova_compute[189508]: 2025-12-01 23:14:17.546 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Instance 42680544-e423-4200-816c-a17b766a4339 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 23:14:17 compute-0 nova_compute[189508]: 2025-12-01 23:14:17.547 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:14:17 compute-0 nova_compute[189508]: 2025-12-01 23:14:17.548 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:14:17 compute-0 nova_compute[189508]: 2025-12-01 23:14:17.662 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:14:17 compute-0 nova_compute[189508]: 2025-12-01 23:14:17.682 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:14:17 compute-0 nova_compute[189508]: 2025-12-01 23:14:17.684 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:14:17 compute-0 nova_compute[189508]: 2025-12-01 23:14:17.685 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.287s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:14:20 compute-0 nova_compute[189508]: 2025-12-01 23:14:20.525 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:21 compute-0 nova_compute[189508]: 2025-12-01 23:14:21.120 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:25 compute-0 nova_compute[189508]: 2025-12-01 23:14:25.529 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:26 compute-0 nova_compute[189508]: 2025-12-01 23:14:26.124 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:29 compute-0 podman[203693]: time="2025-12-01T23:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:14:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Dec  1 23:14:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Dec  1 23:14:30 compute-0 nova_compute[189508]: 2025-12-01 23:14:30.534 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:30 compute-0 podman[260010]: 2025-12-01 23:14:30.829122856 +0000 UTC m=+0.094392429 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:14:31 compute-0 nova_compute[189508]: 2025-12-01 23:14:31.125 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:31 compute-0 openstack_network_exporter[205887]: ERROR   23:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:14:31 compute-0 openstack_network_exporter[205887]: ERROR   23:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:14:31 compute-0 openstack_network_exporter[205887]: ERROR   23:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:14:31 compute-0 openstack_network_exporter[205887]: ERROR   23:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:14:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:14:31 compute-0 openstack_network_exporter[205887]: ERROR   23:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:14:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:14:32 compute-0 podman[260033]: 2025-12-01 23:14:32.847057626 +0000 UTC m=+0.108523124 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 23:14:32 compute-0 podman[260034]: 2025-12-01 23:14:32.851007946 +0000 UTC m=+0.115311804 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 23:14:35 compute-0 nova_compute[189508]: 2025-12-01 23:14:35.538 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:36 compute-0 nova_compute[189508]: 2025-12-01 23:14:36.128 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:38 compute-0 podman[260072]: 2025-12-01 23:14:38.833331663 +0000 UTC m=+0.102219729 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 23:14:38 compute-0 podman[260071]: 2025-12-01 23:14:38.873822765 +0000 UTC m=+0.147201556 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.016 189512 DEBUG oslo_concurrency.lockutils [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "91dfa889-2ab6-4683-bc07-870d2df30bdd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.017 189512 DEBUG oslo_concurrency.lockutils [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.018 189512 DEBUG oslo_concurrency.lockutils [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.019 189512 DEBUG oslo_concurrency.lockutils [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.019 189512 DEBUG oslo_concurrency.lockutils [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.022 189512 INFO nova.compute.manager [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Terminating instance#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.024 189512 DEBUG nova.compute.manager [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 23:14:40 compute-0 kernel: tap0eb5530e-04 (unregistering): left promiscuous mode
Dec  1 23:14:40 compute-0 NetworkManager[56278]: <info>  [1764630880.0755] device (tap0eb5530e-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 23:14:40 compute-0 ovn_controller[97770]: 2025-12-01T23:14:40Z|00173|binding|INFO|Releasing lport 0eb5530e-04fb-4ba5-821f-1494d355dfa5 from this chassis (sb_readonly=0)
Dec  1 23:14:40 compute-0 ovn_controller[97770]: 2025-12-01T23:14:40Z|00174|binding|INFO|Setting lport 0eb5530e-04fb-4ba5-821f-1494d355dfa5 down in Southbound
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.095 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:40 compute-0 ovn_controller[97770]: 2025-12-01T23:14:40Z|00175|binding|INFO|Removing iface tap0eb5530e-04 ovn-installed in OVS
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.106 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:40.108 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:86:00 10.100.2.225'], port_security=['fa:16:3e:c3:86:00 10.100.2.225'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.225/16', 'neutron:device_id': '91dfa889-2ab6-4683-bc07-870d2df30bdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76005ead-26ac-4245-b45f-b052ffa2d506', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b1db1c83-5a48-462b-b1b5-4f849ee50fec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39384b3e-eb99-4e89-ab68-0d8f0f8766e1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=0eb5530e-04fb-4ba5-821f-1494d355dfa5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 23:14:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:40.110 106662 INFO neutron.agent.ovn.metadata.agent [-] Port 0eb5530e-04fb-4ba5-821f-1494d355dfa5 in datapath 76005ead-26ac-4245-b45f-b052ffa2d506 unbound from our chassis#033[00m
Dec  1 23:14:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:40.111 106662 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76005ead-26ac-4245-b45f-b052ffa2d506#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.125 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:40.136 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[01c58bcf-5a0b-41c2-89e9-2299ad23f2a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:14:40 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Dec  1 23:14:40 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 7min 26.471s CPU time.
Dec  1 23:14:40 compute-0 systemd-machined[155759]: Machine qemu-15-instance-0000000e terminated.
Dec  1 23:14:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:40.177 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[5e053f5b-b562-4fe3-bc07-b39a080400a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:14:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:40.181 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[88fc822b-7da9-4a43-8373-6d83460be921]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:14:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:40.228 240026 DEBUG oslo.privsep.daemon [-] privsep: reply[c179e128-3edd-436b-bdcd-f8beef93c212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:14:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:40.252 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[39c48972-ce96-478e-b159-a0d076d986f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76005ead-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:7d:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553339, 'reachable_time': 16040, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260125, 'error': None, 'target': 'ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.259 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.266 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:40.279 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[b896a7c8-fef7-4660-9d80-67dfc6763917]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap76005ead-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 553353, 'tstamp': 553353}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260129, 'error': None, 'target': 'ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap76005ead-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 553356, 'tstamp': 553356}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260129, 'error': None, 'target': 'ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:14:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:40.281 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76005ead-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.282 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.289 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:40.289 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76005ead-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:14:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:40.291 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 23:14:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:40.291 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76005ead-20, col_values=(('external_ids', {'iface-id': '6cd00ec7-5de6-4094-b01c-8ff2beea0431'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:14:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:40.292 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.308 189512 INFO nova.virt.libvirt.driver [-] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Instance destroyed successfully.#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.309 189512 DEBUG nova.objects.instance [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lazy-loading 'resources' on Instance uuid 91dfa889-2ab6-4683-bc07-870d2df30bdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.336 189512 DEBUG nova.virt.libvirt.vif [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T23:00:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7337297-asg-fmnosfr5uizj-dtzzjjxvb4pp-4xpcj3x3kzsh',id=14,image_ref='ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T23:00:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='3dac0f46-9f79-460b-b6c5-9876493d569a'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a0bc498794944fb4bfd74d85d99d70b2',ramdisk_id='',reservation_id='r-oyeail70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-2049243380',owner_user_name='tempest-PrometheusGabbiTest-2049243380-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T23:00:25Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='31117d25a4e94964a6d197de21b13cbe',uuid=91dfa889-2ab6-4683-bc07-870d2df30bdd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.337 189512 DEBUG nova.network.os_vif_util [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Converting VIF {"id": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "address": "fa:16:3e:c3:86:00", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eb5530e-04", "ovs_interfaceid": "0eb5530e-04fb-4ba5-821f-1494d355dfa5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.338 189512 DEBUG nova.network.os_vif_util [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:86:00,bridge_name='br-int',has_traffic_filtering=True,id=0eb5530e-04fb-4ba5-821f-1494d355dfa5,network=Network(76005ead-26ac-4245-b45f-b052ffa2d506),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eb5530e-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.339 189512 DEBUG os_vif [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:86:00,bridge_name='br-int',has_traffic_filtering=True,id=0eb5530e-04fb-4ba5-821f-1494d355dfa5,network=Network(76005ead-26ac-4245-b45f-b052ffa2d506),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eb5530e-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.342 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.343 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0eb5530e-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.346 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.348 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.353 189512 INFO os_vif [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:86:00,bridge_name='br-int',has_traffic_filtering=True,id=0eb5530e-04fb-4ba5-821f-1494d355dfa5,network=Network(76005ead-26ac-4245-b45f-b052ffa2d506),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eb5530e-04')#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.354 189512 INFO nova.virt.libvirt.driver [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Deleting instance files /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd_del#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.354 189512 INFO nova.virt.libvirt.driver [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Deletion of /var/lib/nova/instances/91dfa889-2ab6-4683-bc07-870d2df30bdd_del complete#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.444 189512 INFO nova.compute.manager [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Took 0.42 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.445 189512 DEBUG oslo.service.loopingcall [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.445 189512 DEBUG nova.compute.manager [-] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.445 189512 DEBUG nova.network.neutron [-] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.504 189512 DEBUG nova.compute.manager [req-248815af-91cf-4c27-8921-84fc5dd276ff req-a0c8eca7-c4ca-4f71-bd0a-fb0b3d3599c9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Received event network-vif-unplugged-0eb5530e-04fb-4ba5-821f-1494d355dfa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.505 189512 DEBUG oslo_concurrency.lockutils [req-248815af-91cf-4c27-8921-84fc5dd276ff req-a0c8eca7-c4ca-4f71-bd0a-fb0b3d3599c9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.505 189512 DEBUG oslo_concurrency.lockutils [req-248815af-91cf-4c27-8921-84fc5dd276ff req-a0c8eca7-c4ca-4f71-bd0a-fb0b3d3599c9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.506 189512 DEBUG oslo_concurrency.lockutils [req-248815af-91cf-4c27-8921-84fc5dd276ff req-a0c8eca7-c4ca-4f71-bd0a-fb0b3d3599c9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.506 189512 DEBUG nova.compute.manager [req-248815af-91cf-4c27-8921-84fc5dd276ff req-a0c8eca7-c4ca-4f71-bd0a-fb0b3d3599c9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] No waiting events found dispatching network-vif-unplugged-0eb5530e-04fb-4ba5-821f-1494d355dfa5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.506 189512 DEBUG nova.compute.manager [req-248815af-91cf-4c27-8921-84fc5dd276ff req-a0c8eca7-c4ca-4f71-bd0a-fb0b3d3599c9 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Received event network-vif-unplugged-0eb5530e-04fb-4ba5-821f-1494d355dfa5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 23:14:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:40.684 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 23:14:40 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:40.687 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 23:14:40 compute-0 nova_compute[189508]: 2025-12-01 23:14:40.691 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:41 compute-0 nova_compute[189508]: 2025-12-01 23:14:41.016 189512 DEBUG nova.network.neutron [-] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:14:41 compute-0 nova_compute[189508]: 2025-12-01 23:14:41.035 189512 INFO nova.compute.manager [-] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Took 0.59 seconds to deallocate network for instance.#033[00m
Dec  1 23:14:41 compute-0 nova_compute[189508]: 2025-12-01 23:14:41.104 189512 DEBUG oslo_concurrency.lockutils [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:14:41 compute-0 nova_compute[189508]: 2025-12-01 23:14:41.105 189512 DEBUG oslo_concurrency.lockutils [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:14:41 compute-0 nova_compute[189508]: 2025-12-01 23:14:41.130 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:41 compute-0 nova_compute[189508]: 2025-12-01 23:14:41.201 189512 DEBUG nova.compute.provider_tree [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:14:41 compute-0 nova_compute[189508]: 2025-12-01 23:14:41.221 189512 DEBUG nova.scheduler.client.report [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:14:41 compute-0 nova_compute[189508]: 2025-12-01 23:14:41.241 189512 DEBUG oslo_concurrency.lockutils [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:14:41 compute-0 nova_compute[189508]: 2025-12-01 23:14:41.264 189512 INFO nova.scheduler.client.report [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Deleted allocations for instance 91dfa889-2ab6-4683-bc07-870d2df30bdd#033[00m
Dec  1 23:14:41 compute-0 nova_compute[189508]: 2025-12-01 23:14:41.329 189512 DEBUG oslo_concurrency.lockutils [None req-0031cbe8-a824-4f5c-a21b-e0e605e6ac28 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:14:42 compute-0 nova_compute[189508]: 2025-12-01 23:14:42.635 189512 DEBUG nova.compute.manager [req-2f3a1591-3906-4d7d-a993-d9bcb24f8b3a req-eb5e8019-fd5f-4e9e-84d4-f9079ed3ad9c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Received event network-vif-plugged-0eb5530e-04fb-4ba5-821f-1494d355dfa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 23:14:42 compute-0 nova_compute[189508]: 2025-12-01 23:14:42.636 189512 DEBUG oslo_concurrency.lockutils [req-2f3a1591-3906-4d7d-a993-d9bcb24f8b3a req-eb5e8019-fd5f-4e9e-84d4-f9079ed3ad9c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:14:42 compute-0 nova_compute[189508]: 2025-12-01 23:14:42.637 189512 DEBUG oslo_concurrency.lockutils [req-2f3a1591-3906-4d7d-a993-d9bcb24f8b3a req-eb5e8019-fd5f-4e9e-84d4-f9079ed3ad9c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:14:42 compute-0 nova_compute[189508]: 2025-12-01 23:14:42.638 189512 DEBUG oslo_concurrency.lockutils [req-2f3a1591-3906-4d7d-a993-d9bcb24f8b3a req-eb5e8019-fd5f-4e9e-84d4-f9079ed3ad9c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "91dfa889-2ab6-4683-bc07-870d2df30bdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:14:42 compute-0 nova_compute[189508]: 2025-12-01 23:14:42.639 189512 DEBUG nova.compute.manager [req-2f3a1591-3906-4d7d-a993-d9bcb24f8b3a req-eb5e8019-fd5f-4e9e-84d4-f9079ed3ad9c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] No waiting events found dispatching network-vif-plugged-0eb5530e-04fb-4ba5-821f-1494d355dfa5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 23:14:42 compute-0 nova_compute[189508]: 2025-12-01 23:14:42.640 189512 WARNING nova.compute.manager [req-2f3a1591-3906-4d7d-a993-d9bcb24f8b3a req-eb5e8019-fd5f-4e9e-84d4-f9079ed3ad9c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Received unexpected event network-vif-plugged-0eb5530e-04fb-4ba5-821f-1494d355dfa5 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 23:14:42 compute-0 nova_compute[189508]: 2025-12-01 23:14:42.641 189512 DEBUG nova.compute.manager [req-2f3a1591-3906-4d7d-a993-d9bcb24f8b3a req-eb5e8019-fd5f-4e9e-84d4-f9079ed3ad9c c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Received event network-vif-deleted-0eb5530e-04fb-4ba5-821f-1494d355dfa5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 23:14:45 compute-0 nova_compute[189508]: 2025-12-01 23:14:45.347 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:46 compute-0 nova_compute[189508]: 2025-12-01 23:14:46.133 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:46 compute-0 podman[260144]: 2025-12-01 23:14:46.834169788 +0000 UTC m=+0.093122864 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:14:46 compute-0 podman[260150]: 2025-12-01 23:14:46.845369231 +0000 UTC m=+0.097457085 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, container_name=kepler, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, managed_by=edpm_ansible, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.component=ubi9-container)
Dec  1 23:14:46 compute-0 podman[260145]: 2025-12-01 23:14:46.85105392 +0000 UTC m=+0.120078658 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:14:46 compute-0 podman[260146]: 2025-12-01 23:14:46.883868287 +0000 UTC m=+0.133870693 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.648 189512 DEBUG oslo_concurrency.lockutils [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "42680544-e423-4200-816c-a17b766a4339" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.649 189512 DEBUG oslo_concurrency.lockutils [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "42680544-e423-4200-816c-a17b766a4339" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.649 189512 DEBUG oslo_concurrency.lockutils [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "42680544-e423-4200-816c-a17b766a4339-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.650 189512 DEBUG oslo_concurrency.lockutils [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "42680544-e423-4200-816c-a17b766a4339-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.650 189512 DEBUG oslo_concurrency.lockutils [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "42680544-e423-4200-816c-a17b766a4339-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.652 189512 INFO nova.compute.manager [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Terminating instance#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.653 189512 DEBUG nova.compute.manager [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 23:14:48 compute-0 kernel: tapd040598e-3c (unregistering): left promiscuous mode
Dec  1 23:14:48 compute-0 NetworkManager[56278]: <info>  [1764630888.6970] device (tapd040598e-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 23:14:48 compute-0 ovn_controller[97770]: 2025-12-01T23:14:48Z|00176|binding|INFO|Releasing lport d040598e-3c6d-4c31-a052-e42d95473b17 from this chassis (sb_readonly=0)
Dec  1 23:14:48 compute-0 ovn_controller[97770]: 2025-12-01T23:14:48Z|00177|binding|INFO|Setting lport d040598e-3c6d-4c31-a052-e42d95473b17 down in Southbound
Dec  1 23:14:48 compute-0 ovn_controller[97770]: 2025-12-01T23:14:48Z|00178|binding|INFO|Removing iface tapd040598e-3c ovn-installed in OVS
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.717 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.720 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:48 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:48.725 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:8f:04 10.100.2.30'], port_security=['fa:16:3e:90:8f:04 10.100.2.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.30/16', 'neutron:device_id': '42680544-e423-4200-816c-a17b766a4339', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76005ead-26ac-4245-b45f-b052ffa2d506', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0bc498794944fb4bfd74d85d99d70b2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b1db1c83-5a48-462b-b1b5-4f849ee50fec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39384b3e-eb99-4e89-ab68-0d8f0f8766e1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>], logical_port=d040598e-3c6d-4c31-a052-e42d95473b17) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb9ca8f0e20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 23:14:48 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:48.726 106662 INFO neutron.agent.ovn.metadata.agent [-] Port d040598e-3c6d-4c31-a052-e42d95473b17 in datapath 76005ead-26ac-4245-b45f-b052ffa2d506 unbound from our chassis#033[00m
Dec  1 23:14:48 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:48.728 106662 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76005ead-26ac-4245-b45f-b052ffa2d506, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 23:14:48 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:48.729 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[d032714f-d64b-4a09-a98d-4436bcc23077]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:14:48 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:48.730 106662 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506 namespace which is not needed anymore#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.738 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:48 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Dec  1 23:14:48 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 6min 50.218s CPU time.
Dec  1 23:14:48 compute-0 systemd-machined[155759]: Machine qemu-16-instance-0000000f terminated.
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.919 189512 INFO nova.virt.libvirt.driver [-] [instance: 42680544-e423-4200-816c-a17b766a4339] Instance destroyed successfully.#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.920 189512 DEBUG nova.objects.instance [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lazy-loading 'resources' on Instance uuid 42680544-e423-4200-816c-a17b766a4339 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.939 189512 DEBUG nova.virt.libvirt.vif [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T23:04:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7337297-asg-fmnosfr5uizj-etbbk2jse6ak-ox44jlb3kw3r',id=15,image_ref='ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T23:04:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='3dac0f46-9f79-460b-b6c5-9876493d569a'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a0bc498794944fb4bfd74d85d99d70b2',ramdisk_id='',reservation_id='r-o1y4t3q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ca3539b1-f1c0-4505-ac0a-e6bb3f6ad793',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-2049243380',owner_user_name='tempest-PrometheusGabbiTest-2049243380-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T23:04:52Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='31117d25a4e94964a6d197de21b13cbe',uuid=42680544-e423-4200-816c-a17b766a4339,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d040598e-3c6d-4c31-a052-e42d95473b17", "address": "fa:16:3e:90:8f:04", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd040598e-3c", "ovs_interfaceid": "d040598e-3c6d-4c31-a052-e42d95473b17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.940 189512 DEBUG nova.network.os_vif_util [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Converting VIF {"id": "d040598e-3c6d-4c31-a052-e42d95473b17", "address": "fa:16:3e:90:8f:04", "network": {"id": "76005ead-26ac-4245-b45f-b052ffa2d506", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0bc498794944fb4bfd74d85d99d70b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd040598e-3c", "ovs_interfaceid": "d040598e-3c6d-4c31-a052-e42d95473b17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.940 189512 DEBUG nova.network.os_vif_util [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:8f:04,bridge_name='br-int',has_traffic_filtering=True,id=d040598e-3c6d-4c31-a052-e42d95473b17,network=Network(76005ead-26ac-4245-b45f-b052ffa2d506),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd040598e-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.940 189512 DEBUG os_vif [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:8f:04,bridge_name='br-int',has_traffic_filtering=True,id=d040598e-3c6d-4c31-a052-e42d95473b17,network=Network(76005ead-26ac-4245-b45f-b052ffa2d506),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd040598e-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.942 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.942 189512 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd040598e-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.944 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.945 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.947 189512 INFO os_vif [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:8f:04,bridge_name='br-int',has_traffic_filtering=True,id=d040598e-3c6d-4c31-a052-e42d95473b17,network=Network(76005ead-26ac-4245-b45f-b052ffa2d506),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd040598e-3c')#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.948 189512 INFO nova.virt.libvirt.driver [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Deleting instance files /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339_del#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.949 189512 INFO nova.virt.libvirt.driver [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Deletion of /var/lib/nova/instances/42680544-e423-4200-816c-a17b766a4339_del complete#033[00m
Dec  1 23:14:48 compute-0 neutron-haproxy-ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506[254675]: [NOTICE]   (254679) : haproxy version is 2.8.14-c23fe91
Dec  1 23:14:48 compute-0 neutron-haproxy-ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506[254675]: [NOTICE]   (254679) : path to executable is /usr/sbin/haproxy
Dec  1 23:14:48 compute-0 neutron-haproxy-ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506[254675]: [WARNING]  (254679) : Exiting Master process...
Dec  1 23:14:48 compute-0 neutron-haproxy-ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506[254675]: [ALERT]    (254679) : Current worker (254681) exited with code 143 (Terminated)
Dec  1 23:14:48 compute-0 neutron-haproxy-ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506[254675]: [WARNING]  (254679) : All workers exited. Exiting... (0)
Dec  1 23:14:48 compute-0 systemd[1]: libpod-022589dbf95b724f6d9ad41c3bee0afe9d07772bac003e97f87dec7a2f62283f.scope: Deactivated successfully.
Dec  1 23:14:48 compute-0 podman[260251]: 2025-12-01 23:14:48.970092605 +0000 UTC m=+0.071659854 container died 022589dbf95b724f6d9ad41c3bee0afe9d07772bac003e97f87dec7a2f62283f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.973 189512 DEBUG nova.compute.manager [req-a86f0273-693d-46b8-9bcc-df18146b116b req-0ae61fee-faea-447f-82a4-d7cf344e49d7 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Received event network-vif-unplugged-d040598e-3c6d-4c31-a052-e42d95473b17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.973 189512 DEBUG oslo_concurrency.lockutils [req-a86f0273-693d-46b8-9bcc-df18146b116b req-0ae61fee-faea-447f-82a4-d7cf344e49d7 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "42680544-e423-4200-816c-a17b766a4339-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.974 189512 DEBUG oslo_concurrency.lockutils [req-a86f0273-693d-46b8-9bcc-df18146b116b req-0ae61fee-faea-447f-82a4-d7cf344e49d7 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "42680544-e423-4200-816c-a17b766a4339-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.975 189512 DEBUG oslo_concurrency.lockutils [req-a86f0273-693d-46b8-9bcc-df18146b116b req-0ae61fee-faea-447f-82a4-d7cf344e49d7 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "42680544-e423-4200-816c-a17b766a4339-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.976 189512 DEBUG nova.compute.manager [req-a86f0273-693d-46b8-9bcc-df18146b116b req-0ae61fee-faea-447f-82a4-d7cf344e49d7 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] No waiting events found dispatching network-vif-unplugged-d040598e-3c6d-4c31-a052-e42d95473b17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 23:14:48 compute-0 nova_compute[189508]: 2025-12-01 23:14:48.976 189512 DEBUG nova.compute.manager [req-a86f0273-693d-46b8-9bcc-df18146b116b req-0ae61fee-faea-447f-82a4-d7cf344e49d7 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Received event network-vif-unplugged-d040598e-3c6d-4c31-a052-e42d95473b17 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 23:14:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-022589dbf95b724f6d9ad41c3bee0afe9d07772bac003e97f87dec7a2f62283f-userdata-shm.mount: Deactivated successfully.
Dec  1 23:14:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-71bd82104e90355b90eb760d8aceb7adf586baf4e6b9f39a20907ba78525fa25-merged.mount: Deactivated successfully.
Dec  1 23:14:49 compute-0 podman[260251]: 2025-12-01 23:14:49.023388575 +0000 UTC m=+0.124955824 container cleanup 022589dbf95b724f6d9ad41c3bee0afe9d07772bac003e97f87dec7a2f62283f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  1 23:14:49 compute-0 nova_compute[189508]: 2025-12-01 23:14:49.030 189512 INFO nova.compute.manager [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Took 0.38 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 23:14:49 compute-0 nova_compute[189508]: 2025-12-01 23:14:49.031 189512 DEBUG oslo.service.loopingcall [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 23:14:49 compute-0 systemd[1]: libpod-conmon-022589dbf95b724f6d9ad41c3bee0afe9d07772bac003e97f87dec7a2f62283f.scope: Deactivated successfully.
Dec  1 23:14:49 compute-0 nova_compute[189508]: 2025-12-01 23:14:49.032 189512 DEBUG nova.compute.manager [-] [instance: 42680544-e423-4200-816c-a17b766a4339] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 23:14:49 compute-0 nova_compute[189508]: 2025-12-01 23:14:49.033 189512 DEBUG nova.network.neutron [-] [instance: 42680544-e423-4200-816c-a17b766a4339] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 23:14:49 compute-0 podman[260294]: 2025-12-01 23:14:49.109552444 +0000 UTC m=+0.057427127 container remove 022589dbf95b724f6d9ad41c3bee0afe9d07772bac003e97f87dec7a2f62283f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:14:49 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:49.119 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[f36beb5b-55c5-4681-8598-529495403a12]: (4, ('Mon Dec  1 11:14:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506 (022589dbf95b724f6d9ad41c3bee0afe9d07772bac003e97f87dec7a2f62283f)\n022589dbf95b724f6d9ad41c3bee0afe9d07772bac003e97f87dec7a2f62283f\nMon Dec  1 11:14:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506 (022589dbf95b724f6d9ad41c3bee0afe9d07772bac003e97f87dec7a2f62283f)\n022589dbf95b724f6d9ad41c3bee0afe9d07772bac003e97f87dec7a2f62283f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:14:49 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:49.121 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[828c75b9-8539-45c5-a34c-c8e50ab0da5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:14:49 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:49.122 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76005ead-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:14:49 compute-0 nova_compute[189508]: 2025-12-01 23:14:49.125 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:49 compute-0 kernel: tap76005ead-20: left promiscuous mode
Dec  1 23:14:49 compute-0 nova_compute[189508]: 2025-12-01 23:14:49.129 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:49 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:49.132 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[d26a15e0-0f3b-4fef-b80a-d56a2b083fba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:14:49 compute-0 nova_compute[189508]: 2025-12-01 23:14:49.155 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:49 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:49.166 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[f0259303-946b-456d-be31-9bbacbb679f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:14:49 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:49.168 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6580d5-beee-4faf-b224-0058b0f90d07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:14:49 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:49.186 239973 DEBUG oslo.privsep.daemon [-] privsep: reply[4e41f307-e3d6-44d8-b3f2-a54d9076aeb7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553331, 'reachable_time': 38732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260308, 'error': None, 'target': 'ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:14:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d76005ead\x2d26ac\x2d4245\x2db45f\x2db052ffa2d506.mount: Deactivated successfully.
Dec  1 23:14:49 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:49.189 106770 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-76005ead-26ac-4245-b45f-b052ffa2d506 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 23:14:49 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:49.189 106770 DEBUG oslo.privsep.daemon [-] privsep: reply[388fb721-6d05-4f56-9b3f-e0fe8623e1e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 23:14:49 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:14:49.691 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.014 189512 DEBUG nova.network.neutron [-] [instance: 42680544-e423-4200-816c-a17b766a4339] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.047 189512 INFO nova.compute.manager [-] [instance: 42680544-e423-4200-816c-a17b766a4339] Took 2.01 seconds to deallocate network for instance.#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.125 189512 DEBUG nova.compute.manager [req-d1c4d6c4-b117-47d3-8f19-347671644537 req-25b9b3ec-9c56-48af-8467-54e2e7393a62 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Received event network-vif-plugged-d040598e-3c6d-4c31-a052-e42d95473b17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.125 189512 DEBUG oslo_concurrency.lockutils [req-d1c4d6c4-b117-47d3-8f19-347671644537 req-25b9b3ec-9c56-48af-8467-54e2e7393a62 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Acquiring lock "42680544-e423-4200-816c-a17b766a4339-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.125 189512 DEBUG oslo_concurrency.lockutils [req-d1c4d6c4-b117-47d3-8f19-347671644537 req-25b9b3ec-9c56-48af-8467-54e2e7393a62 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "42680544-e423-4200-816c-a17b766a4339-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.126 189512 DEBUG oslo_concurrency.lockutils [req-d1c4d6c4-b117-47d3-8f19-347671644537 req-25b9b3ec-9c56-48af-8467-54e2e7393a62 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] Lock "42680544-e423-4200-816c-a17b766a4339-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.126 189512 DEBUG nova.compute.manager [req-d1c4d6c4-b117-47d3-8f19-347671644537 req-25b9b3ec-9c56-48af-8467-54e2e7393a62 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] No waiting events found dispatching network-vif-plugged-d040598e-3c6d-4c31-a052-e42d95473b17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.126 189512 WARNING nova.compute.manager [req-d1c4d6c4-b117-47d3-8f19-347671644537 req-25b9b3ec-9c56-48af-8467-54e2e7393a62 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Received unexpected event network-vif-plugged-d040598e-3c6d-4c31-a052-e42d95473b17 for instance with vm_state active and task_state deleting.#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.135 189512 DEBUG nova.compute.manager [req-216236d9-f570-4665-b693-29cf5cf4d417 req-0f2eff25-8a2c-4dd7-9094-aa7c88f1d6c5 c0bfd2ddedf44297a49aeb3fcaf1ea6c 7e49dd7fa2c145e79f690d43313166a3 - - default default] [instance: 42680544-e423-4200-816c-a17b766a4339] Received event network-vif-deleted-d040598e-3c6d-4c31-a052-e42d95473b17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.136 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.139 189512 DEBUG oslo_concurrency.lockutils [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.139 189512 DEBUG oslo_concurrency.lockutils [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.199 189512 DEBUG nova.compute.provider_tree [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.223 189512 DEBUG nova.scheduler.client.report [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.257 189512 DEBUG oslo_concurrency.lockutils [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.284 189512 INFO nova.scheduler.client.report [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Deleted allocations for instance 42680544-e423-4200-816c-a17b766a4339#033[00m
Dec  1 23:14:51 compute-0 nova_compute[189508]: 2025-12-01 23:14:51.382 189512 DEBUG oslo_concurrency.lockutils [None req-b4f25ce9-37ef-4b3b-ab04-c767a42ff70d 31117d25a4e94964a6d197de21b13cbe a0bc498794944fb4bfd74d85d99d70b2 - - default default] Lock "42680544-e423-4200-816c-a17b766a4339" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:14:53 compute-0 nova_compute[189508]: 2025-12-01 23:14:53.945 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:55 compute-0 nova_compute[189508]: 2025-12-01 23:14:55.303 189512 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764630880.3017726, 91dfa889-2ab6-4683-bc07-870d2df30bdd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 23:14:55 compute-0 nova_compute[189508]: 2025-12-01 23:14:55.304 189512 INFO nova.compute.manager [-] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] VM Stopped (Lifecycle Event)#033[00m
Dec  1 23:14:55 compute-0 nova_compute[189508]: 2025-12-01 23:14:55.340 189512 DEBUG nova.compute.manager [None req-c52e6501-cc0a-452c-82a7-14342cf2b9b7 - - - - - -] [instance: 91dfa889-2ab6-4683-bc07-870d2df30bdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 23:14:56 compute-0 nova_compute[189508]: 2025-12-01 23:14:56.138 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:58 compute-0 nova_compute[189508]: 2025-12-01 23:14:58.949 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:14:59 compute-0 podman[203693]: time="2025-12-01T23:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:14:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:14:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4345 "" "Go-http-client/1.1"
Dec  1 23:15:01 compute-0 nova_compute[189508]: 2025-12-01 23:15:01.142 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:01 compute-0 openstack_network_exporter[205887]: ERROR   23:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:15:01 compute-0 openstack_network_exporter[205887]: ERROR   23:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:15:01 compute-0 openstack_network_exporter[205887]: ERROR   23:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:15:01 compute-0 openstack_network_exporter[205887]: ERROR   23:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:15:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:15:01 compute-0 openstack_network_exporter[205887]: ERROR   23:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:15:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:15:01 compute-0 podman[260310]: 2025-12-01 23:15:01.852070144 +0000 UTC m=+0.114376339 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 23:15:03 compute-0 podman[260334]: 2025-12-01 23:15:03.83305177 +0000 UTC m=+0.109919184 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  1 23:15:03 compute-0 podman[260333]: 2025-12-01 23:15:03.851026222 +0000 UTC m=+0.118019870 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible)
Dec  1 23:15:03 compute-0 nova_compute[189508]: 2025-12-01 23:15:03.915 189512 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764630888.9145625, 42680544-e423-4200-816c-a17b766a4339 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 23:15:03 compute-0 nova_compute[189508]: 2025-12-01 23:15:03.916 189512 INFO nova.compute.manager [-] [instance: 42680544-e423-4200-816c-a17b766a4339] VM Stopped (Lifecycle Event)#033[00m
Dec  1 23:15:03 compute-0 nova_compute[189508]: 2025-12-01 23:15:03.938 189512 DEBUG nova.compute.manager [None req-5761d51c-7700-47a8-aa64-19cc46bf24ba - - - - - -] [instance: 42680544-e423-4200-816c-a17b766a4339] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 23:15:03 compute-0 nova_compute[189508]: 2025-12-01 23:15:03.952 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:04 compute-0 nova_compute[189508]: 2025-12-01 23:15:04.451 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:15:04.664 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:15:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:15:04.664 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:15:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:15:04.665 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:15:04 compute-0 nova_compute[189508]: 2025-12-01 23:15:04.680 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:15:06 compute-0 nova_compute[189508]: 2025-12-01 23:15:06.144 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:06 compute-0 nova_compute[189508]: 2025-12-01 23:15:06.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:15:07 compute-0 nova_compute[189508]: 2025-12-01 23:15:07.195 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:15:08 compute-0 nova_compute[189508]: 2025-12-01 23:15:08.955 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:09 compute-0 podman[260374]: 2025-12-01 23:15:09.863118771 +0000 UTC m=+0.121420725 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  1 23:15:09 compute-0 podman[260373]: 2025-12-01 23:15:09.921884894 +0000 UTC m=+0.186243678 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 23:15:10 compute-0 nova_compute[189508]: 2025-12-01 23:15:10.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:15:10 compute-0 nova_compute[189508]: 2025-12-01 23:15:10.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:15:10 compute-0 nova_compute[189508]: 2025-12-01 23:15:10.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:15:10 compute-0 nova_compute[189508]: 2025-12-01 23:15:10.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:15:11 compute-0 nova_compute[189508]: 2025-12-01 23:15:11.146 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:12 compute-0 nova_compute[189508]: 2025-12-01 23:15:12.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:15:13 compute-0 nova_compute[189508]: 2025-12-01 23:15:13.959 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:14 compute-0 nova_compute[189508]: 2025-12-01 23:15:14.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:15:14 compute-0 nova_compute[189508]: 2025-12-01 23:15:14.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:15:14 compute-0 nova_compute[189508]: 2025-12-01 23:15:14.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:15:14 compute-0 nova_compute[189508]: 2025-12-01 23:15:14.214 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 23:15:14 compute-0 nova_compute[189508]: 2025-12-01 23:15:14.214 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:15:15 compute-0 nova_compute[189508]: 2025-12-01 23:15:15.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:15:15 compute-0 nova_compute[189508]: 2025-12-01 23:15:15.243 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:15:15 compute-0 nova_compute[189508]: 2025-12-01 23:15:15.244 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:15:15 compute-0 nova_compute[189508]: 2025-12-01 23:15:15.244 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:15:15 compute-0 nova_compute[189508]: 2025-12-01 23:15:15.245 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:15:15 compute-0 nova_compute[189508]: 2025-12-01 23:15:15.681 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:15:15 compute-0 nova_compute[189508]: 2025-12-01 23:15:15.683 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5368MB free_disk=72.12346267700195GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:15:15 compute-0 nova_compute[189508]: 2025-12-01 23:15:15.683 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:15:15 compute-0 nova_compute[189508]: 2025-12-01 23:15:15.684 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:15:15 compute-0 nova_compute[189508]: 2025-12-01 23:15:15.750 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:15:15 compute-0 nova_compute[189508]: 2025-12-01 23:15:15.751 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:15:15 compute-0 nova_compute[189508]: 2025-12-01 23:15:15.785 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:15:15 compute-0 nova_compute[189508]: 2025-12-01 23:15:15.803 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:15:15 compute-0 nova_compute[189508]: 2025-12-01 23:15:15.824 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:15:15 compute-0 nova_compute[189508]: 2025-12-01 23:15:15.825 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:15:16 compute-0 nova_compute[189508]: 2025-12-01 23:15:16.149 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:17 compute-0 podman[260422]: 2025-12-01 23:15:17.853421652 +0000 UTC m=+0.116167459 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, version=9.4, release=1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 23:15:17 compute-0 podman[260420]: 2025-12-01 23:15:17.855895041 +0000 UTC m=+0.114311367 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:15:17 compute-0 podman[260419]: 2025-12-01 23:15:17.8608728 +0000 UTC m=+0.123653608 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 23:15:17 compute-0 podman[260421]: 2025-12-01 23:15:17.872205097 +0000 UTC m=+0.136857997 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 23:15:18 compute-0 nova_compute[189508]: 2025-12-01 23:15:18.962 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:21 compute-0 nova_compute[189508]: 2025-12-01 23:15:21.151 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:23 compute-0 nova_compute[189508]: 2025-12-01 23:15:23.966 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:26 compute-0 nova_compute[189508]: 2025-12-01 23:15:26.153 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:28 compute-0 nova_compute[189508]: 2025-12-01 23:15:28.970 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:29 compute-0 podman[203693]: time="2025-12-01T23:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:15:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:15:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4348 "" "Go-http-client/1.1"
Dec  1 23:15:31 compute-0 nova_compute[189508]: 2025-12-01 23:15:31.156 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:31 compute-0 openstack_network_exporter[205887]: ERROR   23:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:15:31 compute-0 openstack_network_exporter[205887]: ERROR   23:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:15:31 compute-0 openstack_network_exporter[205887]: ERROR   23:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:15:31 compute-0 openstack_network_exporter[205887]: ERROR   23:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:15:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:15:31 compute-0 openstack_network_exporter[205887]: ERROR   23:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:15:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:15:32 compute-0 podman[260498]: 2025-12-01 23:15:32.838034097 +0000 UTC m=+0.104877953 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:15:33 compute-0 nova_compute[189508]: 2025-12-01 23:15:33.974 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:34 compute-0 podman[260520]: 2025-12-01 23:15:34.848921568 +0000 UTC m=+0.116667093 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:15:34 compute-0 podman[260521]: 2025-12-01 23:15:34.886689813 +0000 UTC m=+0.148181533 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.281 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.281 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.282 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.285 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.286 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.318 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:15:35.322 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:15:36 compute-0 nova_compute[189508]: 2025-12-01 23:15:36.161 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:38 compute-0 nova_compute[189508]: 2025-12-01 23:15:38.978 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:40 compute-0 podman[260560]: 2025-12-01 23:15:40.841579706 +0000 UTC m=+0.111376395 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:15:40 compute-0 podman[260559]: 2025-12-01 23:15:40.878524617 +0000 UTC m=+0.142505273 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:15:41 compute-0 nova_compute[189508]: 2025-12-01 23:15:41.161 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:43 compute-0 nova_compute[189508]: 2025-12-01 23:15:43.984 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:15:44 compute-0 ovn_controller[97770]: 2025-12-01T23:15:44Z|00179|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec  1 23:15:46 compute-0 nova_compute[189508]: 2025-12-01 23:15:46.164 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:15:48 compute-0 podman[260606]: 2025-12-01 23:15:48.800380974 +0000 UTC m=+0.069972647 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, release-0.7.12=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, release=1214.1726694543)
Dec  1 23:15:48 compute-0 podman[260603]: 2025-12-01 23:15:48.805648451 +0000 UTC m=+0.090084189 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 23:15:48 compute-0 podman[260605]: 2025-12-01 23:15:48.809788327 +0000 UTC m=+0.073608089 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, version=9.6, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  1 23:15:48 compute-0 podman[260604]: 2025-12-01 23:15:48.836054771 +0000 UTC m=+0.108206676 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 23:15:48 compute-0 nova_compute[189508]: 2025-12-01 23:15:48.988 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:15:51 compute-0 nova_compute[189508]: 2025-12-01 23:15:51.168 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:15:53 compute-0 nova_compute[189508]: 2025-12-01 23:15:53.992 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:15:56 compute-0 nova_compute[189508]: 2025-12-01 23:15:56.168 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:15:58 compute-0 nova_compute[189508]: 2025-12-01 23:15:58.995 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:15:59 compute-0 podman[203693]: time="2025-12-01T23:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:15:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:15:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4348 "" "Go-http-client/1.1"
Dec  1 23:16:00 compute-0 nova_compute[189508]: 2025-12-01 23:16:00.820 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:16:01 compute-0 nova_compute[189508]: 2025-12-01 23:16:01.171 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:16:01 compute-0 openstack_network_exporter[205887]: ERROR   23:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:16:01 compute-0 openstack_network_exporter[205887]: ERROR   23:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:16:01 compute-0 openstack_network_exporter[205887]: ERROR   23:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:16:01 compute-0 openstack_network_exporter[205887]: ERROR   23:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:16:01 compute-0 openstack_network_exporter[205887]: ERROR   23:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:16:03 compute-0 podman[260682]: 2025-12-01 23:16:03.866944851 +0000 UTC m=+0.135517069 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 23:16:04 compute-0 nova_compute[189508]: 2025-12-01 23:16:03.999 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:16:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:16:04.664 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 23:16:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:16:04.664 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 23:16:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:16:04.665 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 23:16:05 compute-0 podman[260706]: 2025-12-01 23:16:05.835042697 +0000 UTC m=+0.112170826 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:16:05 compute-0 podman[260707]: 2025-12-01 23:16:05.861569859 +0000 UTC m=+0.131871858 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 23:16:06 compute-0 nova_compute[189508]: 2025-12-01 23:16:06.172 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:16:06 compute-0 nova_compute[189508]: 2025-12-01 23:16:06.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:16:09 compute-0 nova_compute[189508]: 2025-12-01 23:16:09.002 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:16:10 compute-0 nova_compute[189508]: 2025-12-01 23:16:10.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:16:10 compute-0 nova_compute[189508]: 2025-12-01 23:16:10.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:16:10 compute-0 nova_compute[189508]: 2025-12-01 23:16:10.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 23:16:11 compute-0 nova_compute[189508]: 2025-12-01 23:16:11.174 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:16:11 compute-0 nova_compute[189508]: 2025-12-01 23:16:11.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:16:11 compute-0 podman[260746]: 2025-12-01 23:16:11.843883677 +0000 UTC m=+0.123805712 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 23:16:11 compute-0 podman[260747]: 2025-12-01 23:16:11.858154206 +0000 UTC m=+0.134017258 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:16:13 compute-0 nova_compute[189508]: 2025-12-01 23:16:13.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:16:14 compute-0 nova_compute[189508]: 2025-12-01 23:16:14.007 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:16:14 compute-0 nova_compute[189508]: 2025-12-01 23:16:14.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:16:14 compute-0 nova_compute[189508]: 2025-12-01 23:16:14.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 23:16:14 compute-0 nova_compute[189508]: 2025-12-01 23:16:14.202 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 23:16:14 compute-0 nova_compute[189508]: 2025-12-01 23:16:14.244 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 23:16:15 compute-0 systemd-logind[788]: New session 31 of user zuul.
Dec  1 23:16:15 compute-0 systemd[1]: Started Session 31 of User zuul.
Dec  1 23:16:15 compute-0 nova_compute[189508]: 2025-12-01 23:16:15.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:16:15 compute-0 nova_compute[189508]: 2025-12-01 23:16:15.240 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 23:16:15 compute-0 nova_compute[189508]: 2025-12-01 23:16:15.241 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 23:16:15 compute-0 nova_compute[189508]: 2025-12-01 23:16:15.241 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 23:16:15 compute-0 nova_compute[189508]: 2025-12-01 23:16:15.241 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 23:16:15 compute-0 nova_compute[189508]: 2025-12-01 23:16:15.688 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 23:16:15 compute-0 nova_compute[189508]: 2025-12-01 23:16:15.691 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5360MB free_disk=72.12346267700195GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 23:16:15 compute-0 nova_compute[189508]: 2025-12-01 23:16:15.692 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 23:16:15 compute-0 nova_compute[189508]: 2025-12-01 23:16:15.693 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 23:16:15 compute-0 nova_compute[189508]: 2025-12-01 23:16:15.785 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 23:16:15 compute-0 nova_compute[189508]: 2025-12-01 23:16:15.786 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 23:16:15 compute-0 nova_compute[189508]: 2025-12-01 23:16:15.814 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 23:16:15 compute-0 nova_compute[189508]: 2025-12-01 23:16:15.833 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 23:16:15 compute-0 nova_compute[189508]: 2025-12-01 23:16:15.836 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:16:15 compute-0 nova_compute[189508]: 2025-12-01 23:16:15.836 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:16:16 compute-0 nova_compute[189508]: 2025-12-01 23:16:16.178 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:16 compute-0 nova_compute[189508]: 2025-12-01 23:16:16.836 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:16:19 compute-0 nova_compute[189508]: 2025-12-01 23:16:19.012 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:19 compute-0 podman[260937]: 2025-12-01 23:16:19.854032941 +0000 UTC m=+0.115080168 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 23:16:19 compute-0 podman[260939]: 2025-12-01 23:16:19.862786756 +0000 UTC m=+0.111996432 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public)
Dec  1 23:16:19 compute-0 podman[260940]: 2025-12-01 23:16:19.869637657 +0000 UTC m=+0.125161600 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.expose-services=, release-0.7.12=, vcs-type=git, container_name=kepler, name=ubi9, architecture=x86_64, managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30)
Dec  1 23:16:19 compute-0 podman[260938]: 2025-12-01 23:16:19.88800005 +0000 UTC m=+0.144561272 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:16:21 compute-0 nova_compute[189508]: 2025-12-01 23:16:21.183 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:21 compute-0 ovs-vsctl[261045]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  1 23:16:22 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 260815 (sos)
Dec  1 23:16:22 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  1 23:16:22 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  1 23:16:22 compute-0 virtqemud[189130]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  1 23:16:22 compute-0 virtqemud[189130]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  1 23:16:23 compute-0 virtqemud[189130]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  1 23:16:24 compute-0 nova_compute[189508]: 2025-12-01 23:16:24.016 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:26 compute-0 nova_compute[189508]: 2025-12-01 23:16:26.182 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:26 compute-0 systemd[1]: Starting Hostname Service...
Dec  1 23:16:26 compute-0 systemd[1]: Started Hostname Service.
Dec  1 23:16:29 compute-0 nova_compute[189508]: 2025-12-01 23:16:29.019 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:29 compute-0 podman[203693]: time="2025-12-01T23:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:16:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:16:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4343 "" "Go-http-client/1.1"
Dec  1 23:16:31 compute-0 nova_compute[189508]: 2025-12-01 23:16:31.186 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:31 compute-0 openstack_network_exporter[205887]: ERROR   23:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:16:31 compute-0 openstack_network_exporter[205887]: ERROR   23:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:16:31 compute-0 openstack_network_exporter[205887]: ERROR   23:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:16:31 compute-0 openstack_network_exporter[205887]: ERROR   23:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:16:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:16:31 compute-0 openstack_network_exporter[205887]: ERROR   23:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:16:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:16:34 compute-0 nova_compute[189508]: 2025-12-01 23:16:34.024 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:34 compute-0 podman[262516]: 2025-12-01 23:16:34.481876835 +0000 UTC m=+0.097834536 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 23:16:35 compute-0 ovs-appctl[262858]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  1 23:16:35 compute-0 ovs-appctl[262863]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  1 23:16:35 compute-0 ovs-appctl[262867]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  1 23:16:36 compute-0 nova_compute[189508]: 2025-12-01 23:16:36.186 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:36 compute-0 podman[263070]: 2025-12-01 23:16:36.39414708 +0000 UTC m=+0.068281400 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec  1 23:16:36 compute-0 podman[263074]: 2025-12-01 23:16:36.414060086 +0000 UTC m=+0.080370157 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 23:16:39 compute-0 nova_compute[189508]: 2025-12-01 23:16:39.027 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:41 compute-0 nova_compute[189508]: 2025-12-01 23:16:41.188 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:42 compute-0 podman[263879]: 2025-12-01 23:16:42.493346515 +0000 UTC m=+0.070597455 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 23:16:42 compute-0 podman[263876]: 2025-12-01 23:16:42.536991705 +0000 UTC m=+0.113874965 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 23:16:44 compute-0 nova_compute[189508]: 2025-12-01 23:16:44.032 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:44 compute-0 virtqemud[189130]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  1 23:16:45 compute-0 systemd[1]: Starting Time & Date Service...
Dec  1 23:16:45 compute-0 systemd[1]: Started Time & Date Service.
Dec  1 23:16:46 compute-0 nova_compute[189508]: 2025-12-01 23:16:46.191 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:49 compute-0 nova_compute[189508]: 2025-12-01 23:16:49.035 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:49 compute-0 podman[264367]: 2025-12-01 23:16:49.998093331 +0000 UTC m=+0.090768719 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 23:16:50 compute-0 podman[264368]: 2025-12-01 23:16:50.012063621 +0000 UTC m=+0.096146038 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Dec  1 23:16:50 compute-0 podman[264372]: 2025-12-01 23:16:50.02383282 +0000 UTC m=+0.087620650 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 23:16:50 compute-0 podman[264369]: 2025-12-01 23:16:50.048239462 +0000 UTC m=+0.131230169 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, container_name=kepler, release-0.7.12=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc.)
Dec  1 23:16:51 compute-0 nova_compute[189508]: 2025-12-01 23:16:51.195 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:54 compute-0 nova_compute[189508]: 2025-12-01 23:16:54.039 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:56 compute-0 nova_compute[189508]: 2025-12-01 23:16:56.197 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:59 compute-0 nova_compute[189508]: 2025-12-01 23:16:59.042 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:16:59 compute-0 podman[203693]: time="2025-12-01T23:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:16:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:16:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4347 "" "Go-http-client/1.1"
Dec  1 23:17:01 compute-0 nova_compute[189508]: 2025-12-01 23:17:01.195 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:17:01 compute-0 nova_compute[189508]: 2025-12-01 23:17:01.198 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:01 compute-0 openstack_network_exporter[205887]: ERROR   23:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:17:01 compute-0 openstack_network_exporter[205887]: ERROR   23:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:17:01 compute-0 openstack_network_exporter[205887]: ERROR   23:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:17:01 compute-0 openstack_network_exporter[205887]: ERROR   23:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:17:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:17:01 compute-0 openstack_network_exporter[205887]: ERROR   23:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:17:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:17:04 compute-0 nova_compute[189508]: 2025-12-01 23:17:04.046 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:17:04.666 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:17:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:17:04.666 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:17:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:17:04.667 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:17:04 compute-0 podman[264445]: 2025-12-01 23:17:04.849217536 +0000 UTC m=+0.113456162 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 23:17:06 compute-0 nova_compute[189508]: 2025-12-01 23:17:06.200 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:06 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Dec  1 23:17:06 compute-0 systemd[1]: session-31.scope: Consumed 1min 33.728s CPU time, 671.5M memory peak, read 279.8M from disk, written 29.9M to disk.
Dec  1 23:17:06 compute-0 systemd-logind[788]: Session 31 logged out. Waiting for processes to exit.
Dec  1 23:17:06 compute-0 systemd-logind[788]: Removed session 31.
Dec  1 23:17:06 compute-0 podman[264467]: 2025-12-01 23:17:06.821858258 +0000 UTC m=+0.101450676 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 23:17:06 compute-0 podman[264468]: 2025-12-01 23:17:06.855232341 +0000 UTC m=+0.111266690 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:17:06 compute-0 systemd-logind[788]: New session 32 of user zuul.
Dec  1 23:17:06 compute-0 systemd[1]: Started Session 32 of User zuul.
Dec  1 23:17:07 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Dec  1 23:17:07 compute-0 systemd-logind[788]: Session 32 logged out. Waiting for processes to exit.
Dec  1 23:17:07 compute-0 systemd-logind[788]: Removed session 32.
Dec  1 23:17:07 compute-0 systemd-logind[788]: New session 33 of user zuul.
Dec  1 23:17:07 compute-0 systemd[1]: Started Session 33 of User zuul.
Dec  1 23:17:07 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Dec  1 23:17:07 compute-0 systemd-logind[788]: Session 33 logged out. Waiting for processes to exit.
Dec  1 23:17:07 compute-0 systemd-logind[788]: Removed session 33.
Dec  1 23:17:08 compute-0 nova_compute[189508]: 2025-12-01 23:17:08.193 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:17:08 compute-0 nova_compute[189508]: 2025-12-01 23:17:08.226 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:17:09 compute-0 nova_compute[189508]: 2025-12-01 23:17:09.050 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:10 compute-0 nova_compute[189508]: 2025-12-01 23:17:10.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:17:11 compute-0 nova_compute[189508]: 2025-12-01 23:17:11.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:17:11 compute-0 nova_compute[189508]: 2025-12-01 23:17:11.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:17:11 compute-0 nova_compute[189508]: 2025-12-01 23:17:11.205 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:12 compute-0 nova_compute[189508]: 2025-12-01 23:17:12.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:17:12 compute-0 podman[264563]: 2025-12-01 23:17:12.82112725 +0000 UTC m=+0.101630562 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 23:17:12 compute-0 podman[264562]: 2025-12-01 23:17:12.861675774 +0000 UTC m=+0.146447085 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:17:14 compute-0 nova_compute[189508]: 2025-12-01 23:17:14.053 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:14 compute-0 nova_compute[189508]: 2025-12-01 23:17:14.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:17:14 compute-0 nova_compute[189508]: 2025-12-01 23:17:14.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:17:14 compute-0 nova_compute[189508]: 2025-12-01 23:17:14.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:17:14 compute-0 nova_compute[189508]: 2025-12-01 23:17:14.231 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 23:17:14 compute-0 nova_compute[189508]: 2025-12-01 23:17:14.231 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:17:15 compute-0 nova_compute[189508]: 2025-12-01 23:17:15.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:17:15 compute-0 nova_compute[189508]: 2025-12-01 23:17:15.264 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:17:15 compute-0 nova_compute[189508]: 2025-12-01 23:17:15.264 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:17:15 compute-0 nova_compute[189508]: 2025-12-01 23:17:15.265 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:17:15 compute-0 nova_compute[189508]: 2025-12-01 23:17:15.265 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:17:15 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  1 23:17:15 compute-0 nova_compute[189508]: 2025-12-01 23:17:15.725 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:17:15 compute-0 nova_compute[189508]: 2025-12-01 23:17:15.726 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5266MB free_disk=72.12293243408203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:17:15 compute-0 nova_compute[189508]: 2025-12-01 23:17:15.726 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:17:15 compute-0 nova_compute[189508]: 2025-12-01 23:17:15.727 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:17:15 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 23:17:15 compute-0 nova_compute[189508]: 2025-12-01 23:17:15.849 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:17:15 compute-0 nova_compute[189508]: 2025-12-01 23:17:15.849 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:17:15 compute-0 nova_compute[189508]: 2025-12-01 23:17:15.876 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:17:15 compute-0 nova_compute[189508]: 2025-12-01 23:17:15.900 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:17:15 compute-0 nova_compute[189508]: 2025-12-01 23:17:15.902 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:17:15 compute-0 nova_compute[189508]: 2025-12-01 23:17:15.903 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:17:16 compute-0 nova_compute[189508]: 2025-12-01 23:17:16.208 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:18 compute-0 nova_compute[189508]: 2025-12-01 23:17:18.903 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:17:19 compute-0 nova_compute[189508]: 2025-12-01 23:17:19.058 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:20 compute-0 podman[264609]: 2025-12-01 23:17:20.844027239 +0000 UTC m=+0.102624069 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-type=git, version=9.6, distribution-scope=public)
Dec  1 23:17:20 compute-0 podman[264610]: 2025-12-01 23:17:20.844514503 +0000 UTC m=+0.100664025 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc.)
Dec  1 23:17:20 compute-0 podman[264607]: 2025-12-01 23:17:20.850384067 +0000 UTC m=+0.120142559 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 23:17:20 compute-0 podman[264608]: 2025-12-01 23:17:20.859769019 +0000 UTC m=+0.114040108 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Dec  1 23:17:21 compute-0 nova_compute[189508]: 2025-12-01 23:17:21.210 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:24 compute-0 nova_compute[189508]: 2025-12-01 23:17:24.062 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:26 compute-0 nova_compute[189508]: 2025-12-01 23:17:26.214 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:29 compute-0 nova_compute[189508]: 2025-12-01 23:17:29.066 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:29 compute-0 podman[203693]: time="2025-12-01T23:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:17:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:17:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4353 "" "Go-http-client/1.1"
Dec  1 23:17:31 compute-0 nova_compute[189508]: 2025-12-01 23:17:31.215 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:31 compute-0 openstack_network_exporter[205887]: ERROR   23:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:17:31 compute-0 openstack_network_exporter[205887]: ERROR   23:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:17:31 compute-0 openstack_network_exporter[205887]: ERROR   23:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:17:31 compute-0 openstack_network_exporter[205887]: ERROR   23:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:17:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:17:31 compute-0 openstack_network_exporter[205887]: ERROR   23:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:17:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:17:34 compute-0 nova_compute[189508]: 2025-12-01 23:17:34.071 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.281 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.283 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.283 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.292 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.292 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.allocation': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:17:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:17:35 compute-0 podman[264686]: 2025-12-01 23:17:35.806029493 +0000 UTC m=+0.084852993 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 23:17:36 compute-0 nova_compute[189508]: 2025-12-01 23:17:36.218 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:37 compute-0 podman[264711]: 2025-12-01 23:17:37.830865985 +0000 UTC m=+0.096652913 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute)
Dec  1 23:17:37 compute-0 podman[264710]: 2025-12-01 23:17:37.838725535 +0000 UTC m=+0.108270878 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  1 23:17:39 compute-0 nova_compute[189508]: 2025-12-01 23:17:39.075 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:41 compute-0 nova_compute[189508]: 2025-12-01 23:17:41.224 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:43 compute-0 podman[264747]: 2025-12-01 23:17:43.836107094 +0000 UTC m=+0.106021185 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec  1 23:17:43 compute-0 podman[264746]: 2025-12-01 23:17:43.935461411 +0000 UTC m=+0.203691565 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec  1 23:17:44 compute-0 nova_compute[189508]: 2025-12-01 23:17:44.078 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:46 compute-0 nova_compute[189508]: 2025-12-01 23:17:46.228 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:49 compute-0 nova_compute[189508]: 2025-12-01 23:17:49.082 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:51 compute-0 nova_compute[189508]: 2025-12-01 23:17:51.231 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:51 compute-0 podman[264792]: 2025-12-01 23:17:51.847700087 +0000 UTC m=+0.114774709 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 23:17:51 compute-0 podman[264794]: 2025-12-01 23:17:51.861834412 +0000 UTC m=+0.115043946 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, architecture=x86_64, release=1755695350, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  1 23:17:51 compute-0 podman[264793]: 2025-12-01 23:17:51.862677816 +0000 UTC m=+0.124219403 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 23:17:51 compute-0 podman[264795]: 2025-12-01 23:17:51.876948305 +0000 UTC m=+0.123236476 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, release-0.7.12=, vcs-type=git, io.openshift.tags=base rhel9, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, config_id=edpm, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc.)
Dec  1 23:17:54 compute-0 nova_compute[189508]: 2025-12-01 23:17:54.088 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:56 compute-0 nova_compute[189508]: 2025-12-01 23:17:56.234 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:59 compute-0 nova_compute[189508]: 2025-12-01 23:17:59.092 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:17:59 compute-0 podman[203693]: time="2025-12-01T23:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:17:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:17:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4350 "" "Go-http-client/1.1"
Dec  1 23:18:01 compute-0 nova_compute[189508]: 2025-12-01 23:18:01.195 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:18:01 compute-0 nova_compute[189508]: 2025-12-01 23:18:01.238 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:01 compute-0 openstack_network_exporter[205887]: ERROR   23:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:18:01 compute-0 openstack_network_exporter[205887]: ERROR   23:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:18:01 compute-0 openstack_network_exporter[205887]: ERROR   23:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:18:01 compute-0 openstack_network_exporter[205887]: ERROR   23:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:18:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:18:01 compute-0 openstack_network_exporter[205887]: ERROR   23:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:18:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:18:04 compute-0 nova_compute[189508]: 2025-12-01 23:18:04.096 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:18:04.667 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:18:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:18:04.668 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:18:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:18:04.668 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:18:06 compute-0 nova_compute[189508]: 2025-12-01 23:18:06.241 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:06 compute-0 podman[264869]: 2025-12-01 23:18:06.791883442 +0000 UTC m=+0.071356556 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 23:18:08 compute-0 nova_compute[189508]: 2025-12-01 23:18:08.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:18:08 compute-0 podman[264894]: 2025-12-01 23:18:08.795364516 +0000 UTC m=+0.074455202 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec  1 23:18:08 compute-0 podman[264893]: 2025-12-01 23:18:08.81840402 +0000 UTC m=+0.102544747 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 23:18:09 compute-0 nova_compute[189508]: 2025-12-01 23:18:09.099 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:11 compute-0 nova_compute[189508]: 2025-12-01 23:18:11.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:18:11 compute-0 nova_compute[189508]: 2025-12-01 23:18:11.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:18:11 compute-0 nova_compute[189508]: 2025-12-01 23:18:11.243 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:12 compute-0 nova_compute[189508]: 2025-12-01 23:18:12.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:18:14 compute-0 nova_compute[189508]: 2025-12-01 23:18:14.102 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:14 compute-0 nova_compute[189508]: 2025-12-01 23:18:14.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:18:14 compute-0 podman[264931]: 2025-12-01 23:18:14.823881817 +0000 UTC m=+0.120086287 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:18:14 compute-0 podman[264930]: 2025-12-01 23:18:14.870759958 +0000 UTC m=+0.165166577 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 23:18:15 compute-0 nova_compute[189508]: 2025-12-01 23:18:15.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:18:15 compute-0 nova_compute[189508]: 2025-12-01 23:18:15.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:18:15 compute-0 nova_compute[189508]: 2025-12-01 23:18:15.233 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:18:15 compute-0 nova_compute[189508]: 2025-12-01 23:18:15.233 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:18:15 compute-0 nova_compute[189508]: 2025-12-01 23:18:15.233 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:18:15 compute-0 nova_compute[189508]: 2025-12-01 23:18:15.233 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:18:15 compute-0 nova_compute[189508]: 2025-12-01 23:18:15.637 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:18:15 compute-0 nova_compute[189508]: 2025-12-01 23:18:15.637 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5316MB free_disk=72.12319564819336GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:18:15 compute-0 nova_compute[189508]: 2025-12-01 23:18:15.638 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:18:15 compute-0 nova_compute[189508]: 2025-12-01 23:18:15.638 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:18:15 compute-0 nova_compute[189508]: 2025-12-01 23:18:15.740 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:18:15 compute-0 nova_compute[189508]: 2025-12-01 23:18:15.740 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:18:15 compute-0 nova_compute[189508]: 2025-12-01 23:18:15.768 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:18:15 compute-0 nova_compute[189508]: 2025-12-01 23:18:15.793 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:18:15 compute-0 nova_compute[189508]: 2025-12-01 23:18:15.794 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:18:15 compute-0 nova_compute[189508]: 2025-12-01 23:18:15.795 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:18:16 compute-0 nova_compute[189508]: 2025-12-01 23:18:16.247 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:16 compute-0 nova_compute[189508]: 2025-12-01 23:18:16.795 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:18:16 compute-0 nova_compute[189508]: 2025-12-01 23:18:16.796 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:18:16 compute-0 nova_compute[189508]: 2025-12-01 23:18:16.797 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:18:16 compute-0 nova_compute[189508]: 2025-12-01 23:18:16.814 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 23:18:19 compute-0 nova_compute[189508]: 2025-12-01 23:18:19.106 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:19 compute-0 nova_compute[189508]: 2025-12-01 23:18:19.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:18:21 compute-0 nova_compute[189508]: 2025-12-01 23:18:21.252 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:22 compute-0 podman[264977]: 2025-12-01 23:18:22.845533883 +0000 UTC m=+0.110274744 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 23:18:22 compute-0 podman[264980]: 2025-12-01 23:18:22.84794661 +0000 UTC m=+0.090401238 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, version=9.4, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base 
Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git)
Dec  1 23:18:22 compute-0 podman[264979]: 2025-12-01 23:18:22.862045505 +0000 UTC m=+0.115378417 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base 
Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec  1 23:18:22 compute-0 podman[264978]: 2025-12-01 23:18:22.882357302 +0000 UTC m=+0.143463281 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:18:24 compute-0 nova_compute[189508]: 2025-12-01 23:18:24.110 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:26 compute-0 nova_compute[189508]: 2025-12-01 23:18:26.253 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:29 compute-0 nova_compute[189508]: 2025-12-01 23:18:29.113 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:29 compute-0 podman[203693]: time="2025-12-01T23:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:18:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:18:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4347 "" "Go-http-client/1.1"
Dec  1 23:18:31 compute-0 nova_compute[189508]: 2025-12-01 23:18:31.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:18:31 compute-0 nova_compute[189508]: 2025-12-01 23:18:31.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 23:18:31 compute-0 nova_compute[189508]: 2025-12-01 23:18:31.219 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 23:18:31 compute-0 nova_compute[189508]: 2025-12-01 23:18:31.255 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:31 compute-0 openstack_network_exporter[205887]: ERROR   23:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:18:31 compute-0 openstack_network_exporter[205887]: ERROR   23:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:18:31 compute-0 openstack_network_exporter[205887]: ERROR   23:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:18:31 compute-0 openstack_network_exporter[205887]: ERROR   23:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:18:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:18:31 compute-0 openstack_network_exporter[205887]: ERROR   23:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:18:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:18:34 compute-0 nova_compute[189508]: 2025-12-01 23:18:34.116 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:36 compute-0 nova_compute[189508]: 2025-12-01 23:18:36.256 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:37 compute-0 podman[265057]: 2025-12-01 23:18:37.799920973 +0000 UTC m=+0.078114344 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:18:39 compute-0 nova_compute[189508]: 2025-12-01 23:18:39.118 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:39 compute-0 nova_compute[189508]: 2025-12-01 23:18:39.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:18:39 compute-0 podman[265082]: 2025-12-01 23:18:39.789532051 +0000 UTC m=+0.063943669 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4)
Dec  1 23:18:39 compute-0 podman[265081]: 2025-12-01 23:18:39.808934563 +0000 UTC m=+0.088571867 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  1 23:18:41 compute-0 nova_compute[189508]: 2025-12-01 23:18:41.248 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:18:41 compute-0 nova_compute[189508]: 2025-12-01 23:18:41.249 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 23:18:41 compute-0 nova_compute[189508]: 2025-12-01 23:18:41.261 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:44 compute-0 nova_compute[189508]: 2025-12-01 23:18:44.121 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:45 compute-0 podman[265120]: 2025-12-01 23:18:45.83050517 +0000 UTC m=+0.098118274 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  1 23:18:45 compute-0 podman[265119]: 2025-12-01 23:18:45.878180443 +0000 UTC m=+0.150984562 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 23:18:46 compute-0 nova_compute[189508]: 2025-12-01 23:18:46.264 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:49 compute-0 nova_compute[189508]: 2025-12-01 23:18:49.124 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:51 compute-0 nova_compute[189508]: 2025-12-01 23:18:51.268 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:53 compute-0 podman[265164]: 2025-12-01 23:18:53.792648032 +0000 UTC m=+0.072986622 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 23:18:53 compute-0 podman[265165]: 2025-12-01 23:18:53.829540073 +0000 UTC m=+0.097168607 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  1 23:18:53 compute-0 podman[265167]: 2025-12-01 23:18:53.838458322 +0000 UTC m=+0.108083532 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, config_id=edpm, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, container_name=openstack_network_exporter)
Dec  1 23:18:53 compute-0 podman[265172]: 2025-12-01 23:18:53.850353295 +0000 UTC m=+0.104972376 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.4)
Dec  1 23:18:54 compute-0 nova_compute[189508]: 2025-12-01 23:18:54.128 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:56 compute-0 nova_compute[189508]: 2025-12-01 23:18:56.271 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:59 compute-0 nova_compute[189508]: 2025-12-01 23:18:59.131 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:18:59 compute-0 podman[203693]: time="2025-12-01T23:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:18:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:18:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4348 "" "Go-http-client/1.1"
Dec  1 23:19:01 compute-0 nova_compute[189508]: 2025-12-01 23:19:01.234 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:19:01 compute-0 nova_compute[189508]: 2025-12-01 23:19:01.274 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:01 compute-0 openstack_network_exporter[205887]: ERROR   23:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:19:01 compute-0 openstack_network_exporter[205887]: ERROR   23:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:19:01 compute-0 openstack_network_exporter[205887]: ERROR   23:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:19:01 compute-0 openstack_network_exporter[205887]: ERROR   23:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:19:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:19:01 compute-0 openstack_network_exporter[205887]: ERROR   23:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:19:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:19:04 compute-0 nova_compute[189508]: 2025-12-01 23:19:04.135 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:19:04.669 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:19:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:19:04.669 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:19:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:19:04.670 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:19:06 compute-0 nova_compute[189508]: 2025-12-01 23:19:06.278 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:08 compute-0 podman[265241]: 2025-12-01 23:19:08.808990854 +0000 UTC m=+0.097388503 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:19:09 compute-0 nova_compute[189508]: 2025-12-01 23:19:09.138 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:09 compute-0 nova_compute[189508]: 2025-12-01 23:19:09.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:19:10 compute-0 podman[265265]: 2025-12-01 23:19:10.808316634 +0000 UTC m=+0.085132621 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec  1 23:19:10 compute-0 podman[265266]: 2025-12-01 23:19:10.840276087 +0000 UTC m=+0.112103335 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4)
Dec  1 23:19:11 compute-0 nova_compute[189508]: 2025-12-01 23:19:11.194 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:19:11 compute-0 nova_compute[189508]: 2025-12-01 23:19:11.278 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:13 compute-0 nova_compute[189508]: 2025-12-01 23:19:13.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:19:13 compute-0 nova_compute[189508]: 2025-12-01 23:19:13.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:19:14 compute-0 nova_compute[189508]: 2025-12-01 23:19:14.141 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:14 compute-0 nova_compute[189508]: 2025-12-01 23:19:14.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:19:14 compute-0 nova_compute[189508]: 2025-12-01 23:19:14.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:19:16 compute-0 nova_compute[189508]: 2025-12-01 23:19:16.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:19:16 compute-0 nova_compute[189508]: 2025-12-01 23:19:16.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:19:16 compute-0 nova_compute[189508]: 2025-12-01 23:19:16.238 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:19:16 compute-0 nova_compute[189508]: 2025-12-01 23:19:16.238 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:19:16 compute-0 nova_compute[189508]: 2025-12-01 23:19:16.239 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:19:16 compute-0 nova_compute[189508]: 2025-12-01 23:19:16.239 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:19:16 compute-0 nova_compute[189508]: 2025-12-01 23:19:16.281 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:16 compute-0 podman[265303]: 2025-12-01 23:19:16.39064275 +0000 UTC m=+0.138911104 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 23:19:16 compute-0 podman[265302]: 2025-12-01 23:19:16.428744035 +0000 UTC m=+0.184894569 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 23:19:16 compute-0 nova_compute[189508]: 2025-12-01 23:19:16.664 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:19:16 compute-0 nova_compute[189508]: 2025-12-01 23:19:16.666 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5307MB free_disk=72.12313461303711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:19:16 compute-0 nova_compute[189508]: 2025-12-01 23:19:16.667 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:19:16 compute-0 nova_compute[189508]: 2025-12-01 23:19:16.667 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:19:16 compute-0 nova_compute[189508]: 2025-12-01 23:19:16.959 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:19:16 compute-0 nova_compute[189508]: 2025-12-01 23:19:16.960 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:19:17 compute-0 nova_compute[189508]: 2025-12-01 23:19:17.082 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing inventories for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 23:19:17 compute-0 nova_compute[189508]: 2025-12-01 23:19:17.227 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating ProviderTree inventory for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 23:19:17 compute-0 nova_compute[189508]: 2025-12-01 23:19:17.228 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating inventory in ProviderTree for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 23:19:17 compute-0 nova_compute[189508]: 2025-12-01 23:19:17.253 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing aggregate associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 23:19:17 compute-0 nova_compute[189508]: 2025-12-01 23:19:17.292 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing trait associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 23:19:17 compute-0 nova_compute[189508]: 2025-12-01 23:19:17.321 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:19:17 compute-0 nova_compute[189508]: 2025-12-01 23:19:17.344 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:19:17 compute-0 nova_compute[189508]: 2025-12-01 23:19:17.348 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:19:17 compute-0 nova_compute[189508]: 2025-12-01 23:19:17.348 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:19:19 compute-0 nova_compute[189508]: 2025-12-01 23:19:19.145 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:19 compute-0 nova_compute[189508]: 2025-12-01 23:19:19.349 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:19:19 compute-0 nova_compute[189508]: 2025-12-01 23:19:19.350 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:19:19 compute-0 nova_compute[189508]: 2025-12-01 23:19:19.350 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:19:19 compute-0 nova_compute[189508]: 2025-12-01 23:19:19.379 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 23:19:19 compute-0 nova_compute[189508]: 2025-12-01 23:19:19.379 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:19:21 compute-0 nova_compute[189508]: 2025-12-01 23:19:21.282 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:22 compute-0 systemd[1]: Starting dnf makecache...
Dec  1 23:19:22 compute-0 dnf[265347]: Metadata cache refreshed recently.
Dec  1 23:19:22 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  1 23:19:22 compute-0 systemd[1]: Finished dnf makecache.
Dec  1 23:19:24 compute-0 nova_compute[189508]: 2025-12-01 23:19:24.148 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:24 compute-0 podman[265348]: 2025-12-01 23:19:24.831423623 +0000 UTC m=+0.099145923 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 23:19:24 compute-0 podman[265350]: 2025-12-01 23:19:24.855333151 +0000 UTC m=+0.106961531 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350)
Dec  1 23:19:24 compute-0 podman[265351]: 2025-12-01 23:19:24.859871498 +0000 UTC m=+0.105777678 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.component=ubi9-container, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, container_name=kepler, name=ubi9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, config_id=edpm, io.openshift.tags=base rhel9, vcs-type=git)
Dec  1 23:19:24 compute-0 podman[265349]: 2025-12-01 23:19:24.880063622 +0000 UTC m=+0.136847047 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 23:19:26 compute-0 nova_compute[189508]: 2025-12-01 23:19:26.286 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:29 compute-0 nova_compute[189508]: 2025-12-01 23:19:29.152 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:29 compute-0 podman[203693]: time="2025-12-01T23:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:19:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:19:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4342 "" "Go-http-client/1.1"
Dec  1 23:19:31 compute-0 nova_compute[189508]: 2025-12-01 23:19:31.290 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:31 compute-0 openstack_network_exporter[205887]: ERROR   23:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:19:31 compute-0 openstack_network_exporter[205887]: ERROR   23:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:19:31 compute-0 openstack_network_exporter[205887]: ERROR   23:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:19:31 compute-0 openstack_network_exporter[205887]: ERROR   23:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:19:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:19:31 compute-0 openstack_network_exporter[205887]: ERROR   23:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:19:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:19:34 compute-0 nova_compute[189508]: 2025-12-01 23:19:34.155 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.282 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.283 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:19:35.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:19:36 compute-0 nova_compute[189508]: 2025-12-01 23:19:36.294 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:39 compute-0 nova_compute[189508]: 2025-12-01 23:19:39.159 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:39 compute-0 podman[265431]: 2025-12-01 23:19:39.838433054 +0000 UTC m=+0.112614739 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 23:19:41 compute-0 nova_compute[189508]: 2025-12-01 23:19:41.296 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:41 compute-0 podman[265454]: 2025-12-01 23:19:41.825705675 +0000 UTC m=+0.094102512 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec  1 23:19:41 compute-0 podman[265455]: 2025-12-01 23:19:41.848248315 +0000 UTC m=+0.110216562 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec  1 23:19:44 compute-0 nova_compute[189508]: 2025-12-01 23:19:44.161 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:46 compute-0 nova_compute[189508]: 2025-12-01 23:19:46.298 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:46 compute-0 podman[265491]: 2025-12-01 23:19:46.871761032 +0000 UTC m=+0.139803079 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  1 23:19:46 compute-0 podman[265490]: 2025-12-01 23:19:46.87775761 +0000 UTC m=+0.150578660 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 23:19:49 compute-0 nova_compute[189508]: 2025-12-01 23:19:49.165 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:51 compute-0 nova_compute[189508]: 2025-12-01 23:19:51.301 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:54 compute-0 nova_compute[189508]: 2025-12-01 23:19:54.169 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:55 compute-0 podman[265535]: 2025-12-01 23:19:55.814794285 +0000 UTC m=+0.091275933 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:19:55 compute-0 podman[265536]: 2025-12-01 23:19:55.832009026 +0000 UTC m=+0.087523418 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, 
org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:19:55 compute-0 podman[265540]: 2025-12-01 23:19:55.856087549 +0000 UTC m=+0.113991367 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, version=9.6, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41)
Dec  1 23:19:55 compute-0 podman[265543]: 2025-12-01 23:19:55.860628726 +0000 UTC m=+0.101850378 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, name=ubi9, vendor=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, architecture=x86_64, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30)
Dec  1 23:19:56 compute-0 nova_compute[189508]: 2025-12-01 23:19:56.304 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:59 compute-0 nova_compute[189508]: 2025-12-01 23:19:59.173 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:19:59 compute-0 podman[203693]: time="2025-12-01T23:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:19:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:19:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4342 "" "Go-http-client/1.1"
Dec  1 23:20:01 compute-0 nova_compute[189508]: 2025-12-01 23:20:01.308 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:01 compute-0 openstack_network_exporter[205887]: ERROR   23:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:20:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:20:01 compute-0 openstack_network_exporter[205887]: ERROR   23:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:20:01 compute-0 openstack_network_exporter[205887]: ERROR   23:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:20:01 compute-0 openstack_network_exporter[205887]: ERROR   23:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:20:01 compute-0 openstack_network_exporter[205887]: ERROR   23:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:20:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:20:03 compute-0 nova_compute[189508]: 2025-12-01 23:20:03.224 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:20:04 compute-0 nova_compute[189508]: 2025-12-01 23:20:04.176 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:20:04.670 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:20:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:20:04.671 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:20:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:20:04.671 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:20:06 compute-0 nova_compute[189508]: 2025-12-01 23:20:06.310 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:09 compute-0 nova_compute[189508]: 2025-12-01 23:20:09.181 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:10 compute-0 podman[265612]: 2025-12-01 23:20:10.800365427 +0000 UTC m=+0.076280703 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 23:20:11 compute-0 nova_compute[189508]: 2025-12-01 23:20:11.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:20:11 compute-0 nova_compute[189508]: 2025-12-01 23:20:11.314 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:12 compute-0 podman[265636]: 2025-12-01 23:20:12.826466415 +0000 UTC m=+0.105813569 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 23:20:12 compute-0 podman[265637]: 2025-12-01 23:20:12.863768258 +0000 UTC m=+0.138134423 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:20:13 compute-0 nova_compute[189508]: 2025-12-01 23:20:13.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:20:13 compute-0 nova_compute[189508]: 2025-12-01 23:20:13.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:20:14 compute-0 nova_compute[189508]: 2025-12-01 23:20:14.185 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:15 compute-0 nova_compute[189508]: 2025-12-01 23:20:15.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:20:15 compute-0 nova_compute[189508]: 2025-12-01 23:20:15.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.239 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.240 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.240 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.240 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.316 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.719 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.720 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5334MB free_disk=72.12316131591797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.720 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.721 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.791 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.791 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.828 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.843 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.846 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:20:16 compute-0 nova_compute[189508]: 2025-12-01 23:20:16.846 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:20:17 compute-0 podman[265674]: 2025-12-01 23:20:17.84352723 +0000 UTC m=+0.108505824 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 23:20:17 compute-0 podman[265673]: 2025-12-01 23:20:17.897887599 +0000 UTC m=+0.166878495 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  1 23:20:18 compute-0 nova_compute[189508]: 2025-12-01 23:20:18.848 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:20:18 compute-0 nova_compute[189508]: 2025-12-01 23:20:18.849 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:20:18 compute-0 nova_compute[189508]: 2025-12-01 23:20:18.849 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:20:18 compute-0 nova_compute[189508]: 2025-12-01 23:20:18.872 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 23:20:19 compute-0 nova_compute[189508]: 2025-12-01 23:20:19.188 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:21 compute-0 nova_compute[189508]: 2025-12-01 23:20:21.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:20:21 compute-0 nova_compute[189508]: 2025-12-01 23:20:21.320 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:24 compute-0 nova_compute[189508]: 2025-12-01 23:20:24.193 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:26 compute-0 nova_compute[189508]: 2025-12-01 23:20:26.324 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:26 compute-0 podman[265714]: 2025-12-01 23:20:26.806767375 +0000 UTC m=+0.082149737 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:20:26 compute-0 podman[265713]: 2025-12-01 23:20:26.831179557 +0000 UTC m=+0.113823502 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:20:26 compute-0 podman[265715]: 2025-12-01 23:20:26.843683187 +0000 UTC m=+0.116639642 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, managed_by=edpm_ansible, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 23:20:26 compute-0 podman[265716]: 2025-12-01 23:20:26.843852282 +0000 UTC m=+0.117140776 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_id=edpm, io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, container_name=kepler, release-0.7.12=, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized 
applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, com.redhat.component=ubi9-container, name=ubi9, release=1214.1726694543)
Dec  1 23:20:29 compute-0 nova_compute[189508]: 2025-12-01 23:20:29.197 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:29 compute-0 podman[203693]: time="2025-12-01T23:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:20:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:20:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4354 "" "Go-http-client/1.1"
Dec  1 23:20:31 compute-0 nova_compute[189508]: 2025-12-01 23:20:31.329 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:31 compute-0 openstack_network_exporter[205887]: ERROR   23:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:20:31 compute-0 openstack_network_exporter[205887]: ERROR   23:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:20:31 compute-0 openstack_network_exporter[205887]: ERROR   23:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:20:31 compute-0 openstack_network_exporter[205887]: ERROR   23:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:20:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:20:31 compute-0 openstack_network_exporter[205887]: ERROR   23:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:20:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:20:34 compute-0 nova_compute[189508]: 2025-12-01 23:20:34.200 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:36 compute-0 nova_compute[189508]: 2025-12-01 23:20:36.333 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:39 compute-0 nova_compute[189508]: 2025-12-01 23:20:39.205 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:41 compute-0 nova_compute[189508]: 2025-12-01 23:20:41.334 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:41 compute-0 podman[265791]: 2025-12-01 23:20:41.848936377 +0000 UTC m=+0.114852321 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:20:43 compute-0 podman[265815]: 2025-12-01 23:20:43.823426562 +0000 UTC m=+0.091592591 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 23:20:43 compute-0 podman[265814]: 2025-12-01 23:20:43.833937596 +0000 UTC m=+0.097524057 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:20:44 compute-0 nova_compute[189508]: 2025-12-01 23:20:44.211 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:46 compute-0 nova_compute[189508]: 2025-12-01 23:20:46.337 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:48 compute-0 podman[265852]: 2025-12-01 23:20:48.827659188 +0000 UTC m=+0.105826809 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 23:20:48 compute-0 podman[265851]: 2025-12-01 23:20:48.836642599 +0000 UTC m=+0.120137469 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller)
Dec  1 23:20:49 compute-0 nova_compute[189508]: 2025-12-01 23:20:49.213 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:51 compute-0 nova_compute[189508]: 2025-12-01 23:20:51.340 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:54 compute-0 nova_compute[189508]: 2025-12-01 23:20:54.216 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:56 compute-0 nova_compute[189508]: 2025-12-01 23:20:56.343 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:20:57 compute-0 podman[265896]: 2025-12-01 23:20:57.845914343 +0000 UTC m=+0.113277568 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 
Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 23:20:57 compute-0 podman[265898]: 2025-12-01 23:20:57.846953772 +0000 UTC m=+0.097459276 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-container, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., release-0.7.12=, version=9.4, 
io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 23:20:57 compute-0 podman[265895]: 2025-12-01 23:20:57.852655471 +0000 UTC m=+0.124829951 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:20:57 compute-0 podman[265897]: 2025-12-01 23:20:57.857020663 +0000 UTC m=+0.115962553 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350)
Dec  1 23:20:59 compute-0 nova_compute[189508]: 2025-12-01 23:20:59.220 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:20:59 compute-0 podman[203693]: time="2025-12-01T23:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:20:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:20:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4352 "" "Go-http-client/1.1"
Dec  1 23:21:01 compute-0 nova_compute[189508]: 2025-12-01 23:21:01.347 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:21:01 compute-0 openstack_network_exporter[205887]: ERROR   23:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:21:01 compute-0 openstack_network_exporter[205887]: ERROR   23:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:21:01 compute-0 openstack_network_exporter[205887]: ERROR   23:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:21:01 compute-0 openstack_network_exporter[205887]: ERROR   23:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:21:01 compute-0 openstack_network_exporter[205887]: ERROR   23:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:21:04 compute-0 nova_compute[189508]: 2025-12-01 23:21:04.196 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:21:04 compute-0 nova_compute[189508]: 2025-12-01 23:21:04.227 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:21:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:21:04.673 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 23:21:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:21:04.675 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 23:21:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:21:04.675 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 23:21:06 compute-0 nova_compute[189508]: 2025-12-01 23:21:06.349 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:21:09 compute-0 nova_compute[189508]: 2025-12-01 23:21:09.230 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:21:11 compute-0 nova_compute[189508]: 2025-12-01 23:21:11.194 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:21:11 compute-0 nova_compute[189508]: 2025-12-01 23:21:11.350 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:21:13 compute-0 nova_compute[189508]: 2025-12-01 23:21:13.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:21:13 compute-0 podman[265977]: 2025-12-01 23:21:13.342679345 +0000 UTC m=+0.077268620 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:21:14 compute-0 nova_compute[189508]: 2025-12-01 23:21:14.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:21:14 compute-0 nova_compute[189508]: 2025-12-01 23:21:14.198 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 23:21:14 compute-0 nova_compute[189508]: 2025-12-01 23:21:14.233 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:21:14 compute-0 podman[266001]: 2025-12-01 23:21:14.802967606 +0000 UTC m=+0.099011319 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  1 23:21:14 compute-0 podman[266000]: 2025-12-01 23:21:14.803175322 +0000 UTC m=+0.108131754 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 23:21:15 compute-0 nova_compute[189508]: 2025-12-01 23:21:15.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:21:16 compute-0 nova_compute[189508]: 2025-12-01 23:21:16.352 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:21:17 compute-0 nova_compute[189508]: 2025-12-01 23:21:17.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:21:17 compute-0 nova_compute[189508]: 2025-12-01 23:21:17.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:21:17 compute-0 nova_compute[189508]: 2025-12-01 23:21:17.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:21:17 compute-0 nova_compute[189508]: 2025-12-01 23:21:17.387 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 23:21:17 compute-0 nova_compute[189508]: 2025-12-01 23:21:17.388 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 23:21:17 compute-0 nova_compute[189508]: 2025-12-01 23:21:17.388 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 23:21:17 compute-0 nova_compute[189508]: 2025-12-01 23:21:17.388 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 23:21:17 compute-0 nova_compute[189508]: 2025-12-01 23:21:17.894 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 23:21:17 compute-0 nova_compute[189508]: 2025-12-01 23:21:17.895 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5331MB free_disk=72.12316131591797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:21:17 compute-0 nova_compute[189508]: 2025-12-01 23:21:17.895 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 23:21:17 compute-0 nova_compute[189508]: 2025-12-01 23:21:17.896 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 23:21:19 compute-0 nova_compute[189508]: 2025-12-01 23:21:19.237 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 23:21:19 compute-0 nova_compute[189508]: 2025-12-01 23:21:19.237 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 23:21:19 compute-0 nova_compute[189508]: 2025-12-01 23:21:19.239 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:21:19 compute-0 podman[266038]: 2025-12-01 23:21:19.83357316 +0000 UTC m=+0.096914340 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  1 23:21:19 compute-0 podman[266037]: 2025-12-01 23:21:19.873811325 +0000 UTC m=+0.145946371 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Dec  1 23:21:20 compute-0 nova_compute[189508]: 2025-12-01 23:21:20.016 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 23:21:20 compute-0 nova_compute[189508]: 2025-12-01 23:21:20.187 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 23:21:20 compute-0 nova_compute[189508]: 2025-12-01 23:21:20.188 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 23:21:20 compute-0 nova_compute[189508]: 2025-12-01 23:21:20.188 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 23:21:21 compute-0 nova_compute[189508]: 2025-12-01 23:21:21.188 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:21:21 compute-0 nova_compute[189508]: 2025-12-01 23:21:21.189 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 23:21:21 compute-0 nova_compute[189508]: 2025-12-01 23:21:21.191 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 23:21:21 compute-0 nova_compute[189508]: 2025-12-01 23:21:21.211 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 23:21:21 compute-0 nova_compute[189508]: 2025-12-01 23:21:21.354 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:21:22 compute-0 nova_compute[189508]: 2025-12-01 23:21:22.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 23:21:24 compute-0 nova_compute[189508]: 2025-12-01 23:21:24.243 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:21:25 compute-0 nova_compute[189508]: 2025-12-01 23:21:25.436 189512 DEBUG oslo_concurrency.processutils [None req-f790136b-7051-4610-a7f5-642b9eb9a5df 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 23:21:25 compute-0 nova_compute[189508]: 2025-12-01 23:21:25.478 189512 DEBUG oslo_concurrency.processutils [None req-f790136b-7051-4610-a7f5-642b9eb9a5df 3b810e864d6c4d058e539f62ad181096 af2fbf0e1b5f40c19aed69d241db7727 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 23:21:26 compute-0 nova_compute[189508]: 2025-12-01 23:21:26.357 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:21:28 compute-0 podman[266081]: 2025-12-01 23:21:28.809401006 +0000 UTC m=+0.081997383 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41)
Dec  1 23:21:28 compute-0 podman[266082]: 2025-12-01 23:21:28.811797363 +0000 UTC m=+0.088204846 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, config_id=edpm, io.openshift.tags=base rhel9, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Dec  1 23:21:28 compute-0 podman[266079]: 2025-12-01 23:21:28.830581538 +0000 UTC m=+0.110547441 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 23:21:28 compute-0 podman[266080]: 2025-12-01 23:21:28.845331631 +0000 UTC m=+0.115864670 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, 
maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 23:21:29 compute-0 nova_compute[189508]: 2025-12-01 23:21:29.246 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:21:29 compute-0 podman[203693]: time="2025-12-01T23:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:21:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:21:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4355 "" "Go-http-client/1.1"
Dec  1 23:21:31 compute-0 nova_compute[189508]: 2025-12-01 23:21:31.359 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:21:31 compute-0 openstack_network_exporter[205887]: ERROR   23:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:21:31 compute-0 openstack_network_exporter[205887]: ERROR   23:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:21:31 compute-0 openstack_network_exporter[205887]: ERROR   23:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:21:31 compute-0 openstack_network_exporter[205887]: ERROR   23:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:21:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:21:31 compute-0 openstack_network_exporter[205887]: ERROR   23:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:21:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:21:32 compute-0 nova_compute[189508]: 2025-12-01 23:21:32.671 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:21:32 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:21:32.671 106662 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'e2:d3:e7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:af:4f:71:cc:04'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 23:21:32 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:21:32.673 106662 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 23:21:34 compute-0 nova_compute[189508]: 2025-12-01 23:21:34.250 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.283 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.284 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.284 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.288 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.310 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:21:35.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:21:36 compute-0 nova_compute[189508]: 2025-12-01 23:21:36.361 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:21:36 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:21:36.676 106662 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=345f0b4e-2d1d-4c47-8fa9-2c9a0377db1e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 23:21:39 compute-0 nova_compute[189508]: 2025-12-01 23:21:39.255 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:21:41 compute-0 nova_compute[189508]: 2025-12-01 23:21:41.363 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:21:43 compute-0 podman[266158]: 2025-12-01 23:21:43.87044793 +0000 UTC m=+0.136278821 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 23:21:44 compute-0 nova_compute[189508]: 2025-12-01 23:21:44.258 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:21:45 compute-0 podman[266183]: 2025-12-01 23:21:45.862894056 +0000 UTC m=+0.120066907 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  1 23:21:45 compute-0 podman[266182]: 2025-12-01 23:21:45.86554731 +0000 UTC m=+0.131344882 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125)
Dec  1 23:21:46 compute-0 nova_compute[189508]: 2025-12-01 23:21:46.366 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:21:49 compute-0 nova_compute[189508]: 2025-12-01 23:21:49.260 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:21:50 compute-0 podman[266222]: 2025-12-01 23:21:50.831993452 +0000 UTC m=+0.105859930 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  1 23:21:50 compute-0 podman[266221]: 2025-12-01 23:21:50.891923937 +0000 UTC m=+0.168392118 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 23:21:51 compute-0 nova_compute[189508]: 2025-12-01 23:21:51.370 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:21:54 compute-0 nova_compute[189508]: 2025-12-01 23:21:54.264 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:21:56 compute-0 nova_compute[189508]: 2025-12-01 23:21:56.372 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:21:59 compute-0 nova_compute[189508]: 2025-12-01 23:21:59.268 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:21:59 compute-0 podman[203693]: time="2025-12-01T23:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:21:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:21:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4348 "" "Go-http-client/1.1"
Dec  1 23:21:59 compute-0 podman[266263]: 2025-12-01 23:21:59.833532289 +0000 UTC m=+0.097332742 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, version=9.6, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-type=git)
Dec  1 23:21:59 compute-0 podman[266264]: 2025-12-01 23:21:59.860360459 +0000 UTC m=+0.092137437 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.29.0, container_name=kepler, name=ubi9, release-0.7.12=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30)
Dec  1 23:21:59 compute-0 podman[266261]: 2025-12-01 23:21:59.869554076 +0000 UTC m=+0.133563985 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 23:21:59 compute-0 podman[266262]: 2025-12-01 23:21:59.879059052 +0000 UTC m=+0.135023396 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:22:01 compute-0 nova_compute[189508]: 2025-12-01 23:22:01.374 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:01 compute-0 openstack_network_exporter[205887]: ERROR   23:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:22:01 compute-0 openstack_network_exporter[205887]: ERROR   23:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:22:01 compute-0 openstack_network_exporter[205887]: ERROR   23:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:22:01 compute-0 openstack_network_exporter[205887]: ERROR   23:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:22:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:22:01 compute-0 openstack_network_exporter[205887]: ERROR   23:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:22:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:22:04 compute-0 nova_compute[189508]: 2025-12-01 23:22:04.272 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:22:04.675 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:22:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:22:04.675 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:22:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:22:04.676 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:22:05 compute-0 nova_compute[189508]: 2025-12-01 23:22:05.195 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:22:06 compute-0 nova_compute[189508]: 2025-12-01 23:22:06.376 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:09 compute-0 nova_compute[189508]: 2025-12-01 23:22:09.276 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:11 compute-0 nova_compute[189508]: 2025-12-01 23:22:11.380 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:13 compute-0 nova_compute[189508]: 2025-12-01 23:22:13.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:22:14 compute-0 nova_compute[189508]: 2025-12-01 23:22:14.279 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:14 compute-0 podman[266341]: 2025-12-01 23:22:14.832920308 +0000 UTC m=+0.122846649 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:22:15 compute-0 nova_compute[189508]: 2025-12-01 23:22:15.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:22:15 compute-0 nova_compute[189508]: 2025-12-01 23:22:15.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:22:15 compute-0 nova_compute[189508]: 2025-12-01 23:22:15.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:22:16 compute-0 nova_compute[189508]: 2025-12-01 23:22:16.383 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:16 compute-0 podman[266364]: 2025-12-01 23:22:16.841633504 +0000 UTC m=+0.112826817 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Dec  1 23:22:16 compute-0 podman[266365]: 2025-12-01 23:22:16.850928907 +0000 UTC m=+0.118957841 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec  1 23:22:17 compute-0 nova_compute[189508]: 2025-12-01 23:22:17.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:22:17 compute-0 nova_compute[189508]: 2025-12-01 23:22:17.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:22:17 compute-0 nova_compute[189508]: 2025-12-01 23:22:17.230 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:22:17 compute-0 nova_compute[189508]: 2025-12-01 23:22:17.230 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:22:17 compute-0 nova_compute[189508]: 2025-12-01 23:22:17.230 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:22:17 compute-0 nova_compute[189508]: 2025-12-01 23:22:17.231 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:22:17 compute-0 nova_compute[189508]: 2025-12-01 23:22:17.647 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:22:17 compute-0 nova_compute[189508]: 2025-12-01 23:22:17.649 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5322MB free_disk=72.11534881591797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:22:17 compute-0 nova_compute[189508]: 2025-12-01 23:22:17.650 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:22:17 compute-0 nova_compute[189508]: 2025-12-01 23:22:17.650 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:22:17 compute-0 nova_compute[189508]: 2025-12-01 23:22:17.707 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:22:17 compute-0 nova_compute[189508]: 2025-12-01 23:22:17.707 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:22:17 compute-0 nova_compute[189508]: 2025-12-01 23:22:17.831 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:22:17 compute-0 nova_compute[189508]: 2025-12-01 23:22:17.850 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:22:17 compute-0 nova_compute[189508]: 2025-12-01 23:22:17.852 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:22:17 compute-0 nova_compute[189508]: 2025-12-01 23:22:17.853 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:22:18 compute-0 nova_compute[189508]: 2025-12-01 23:22:18.854 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:22:18 compute-0 nova_compute[189508]: 2025-12-01 23:22:18.855 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:22:18 compute-0 nova_compute[189508]: 2025-12-01 23:22:18.855 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:22:18 compute-0 nova_compute[189508]: 2025-12-01 23:22:18.871 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 23:22:18 compute-0 nova_compute[189508]: 2025-12-01 23:22:18.872 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:22:19 compute-0 nova_compute[189508]: 2025-12-01 23:22:19.282 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:20 compute-0 nova_compute[189508]: 2025-12-01 23:22:20.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:22:21 compute-0 nova_compute[189508]: 2025-12-01 23:22:21.386 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:21 compute-0 podman[266404]: 2025-12-01 23:22:21.827738917 +0000 UTC m=+0.101814855 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:22:21 compute-0 podman[266403]: 2025-12-01 23:22:21.85475335 +0000 UTC m=+0.133952172 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  1 23:22:24 compute-0 nova_compute[189508]: 2025-12-01 23:22:24.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:22:24 compute-0 nova_compute[189508]: 2025-12-01 23:22:24.285 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:26 compute-0 nova_compute[189508]: 2025-12-01 23:22:26.389 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:29 compute-0 nova_compute[189508]: 2025-12-01 23:22:29.289 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:29 compute-0 podman[203693]: time="2025-12-01T23:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:22:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:22:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4349 "" "Go-http-client/1.1"
Dec  1 23:22:30 compute-0 podman[266447]: 2025-12-01 23:22:30.863944472 +0000 UTC m=+0.122726126 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.buildah.version=1.33.7, vendor=Red Hat, Inc.)
Dec  1 23:22:30 compute-0 podman[266448]: 2025-12-01 23:22:30.865521026 +0000 UTC m=+0.105323044 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.29.0, io.openshift.expose-services=, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.openshift.tags=base rhel9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 23:22:30 compute-0 podman[266445]: 2025-12-01 23:22:30.870826116 +0000 UTC m=+0.129788316 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:22:30 compute-0 podman[266446]: 2025-12-01 23:22:30.88620431 +0000 UTC m=+0.134011725 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Dec  1 23:22:31 compute-0 nova_compute[189508]: 2025-12-01 23:22:31.391 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:31 compute-0 openstack_network_exporter[205887]: ERROR   23:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:22:31 compute-0 openstack_network_exporter[205887]: ERROR   23:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:22:31 compute-0 openstack_network_exporter[205887]: ERROR   23:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:22:31 compute-0 openstack_network_exporter[205887]: ERROR   23:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:22:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:22:31 compute-0 openstack_network_exporter[205887]: ERROR   23:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:22:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:22:34 compute-0 nova_compute[189508]: 2025-12-01 23:22:34.294 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:36 compute-0 nova_compute[189508]: 2025-12-01 23:22:36.394 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:39 compute-0 nova_compute[189508]: 2025-12-01 23:22:39.298 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:41 compute-0 nova_compute[189508]: 2025-12-01 23:22:41.398 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:44 compute-0 nova_compute[189508]: 2025-12-01 23:22:44.302 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:45 compute-0 podman[266526]: 2025-12-01 23:22:45.819895249 +0000 UTC m=+0.091016142 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 23:22:46 compute-0 nova_compute[189508]: 2025-12-01 23:22:46.399 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:47 compute-0 podman[266551]: 2025-12-01 23:22:47.792912426 +0000 UTC m=+0.072510519 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 23:22:47 compute-0 podman[266550]: 2025-12-01 23:22:47.797755202 +0000 UTC m=+0.069716029 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 23:22:49 compute-0 nova_compute[189508]: 2025-12-01 23:22:49.305 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:51 compute-0 nova_compute[189508]: 2025-12-01 23:22:51.402 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:52 compute-0 podman[266591]: 2025-12-01 23:22:52.80600526 +0000 UTC m=+0.090411924 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 23:22:52 compute-0 podman[266590]: 2025-12-01 23:22:52.822968308 +0000 UTC m=+0.109472481 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  1 23:22:54 compute-0 nova_compute[189508]: 2025-12-01 23:22:54.307 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:56 compute-0 nova_compute[189508]: 2025-12-01 23:22:56.406 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:59 compute-0 nova_compute[189508]: 2025-12-01 23:22:59.309 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:22:59 compute-0 podman[203693]: time="2025-12-01T23:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:22:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:22:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4353 "" "Go-http-client/1.1"
Dec  1 23:23:01 compute-0 nova_compute[189508]: 2025-12-01 23:23:01.409 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:01 compute-0 openstack_network_exporter[205887]: ERROR   23:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:23:01 compute-0 openstack_network_exporter[205887]: ERROR   23:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:23:01 compute-0 openstack_network_exporter[205887]: ERROR   23:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:23:01 compute-0 openstack_network_exporter[205887]: ERROR   23:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:23:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:23:01 compute-0 openstack_network_exporter[205887]: ERROR   23:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:23:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:23:01 compute-0 podman[266635]: 2025-12-01 23:23:01.787984802 +0000 UTC m=+0.076010347 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 23:23:01 compute-0 podman[266637]: 2025-12-01 23:23:01.821813767 +0000 UTC m=+0.102845555 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, architecture=x86_64, name=ubi9-minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 23:23:01 compute-0 podman[266638]: 2025-12-01 23:23:01.824142793 +0000 UTC m=+0.091357151 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, container_name=kepler, distribution-scope=public, name=ubi9, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30)
Dec  1 23:23:01 compute-0 podman[266636]: 2025-12-01 23:23:01.845253459 +0000 UTC m=+0.117164769 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec  1 23:23:01 compute-0 rsyslogd[236992]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 23:23:01 compute-0 rsyslogd[236992]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 23:23:04 compute-0 nova_compute[189508]: 2025-12-01 23:23:04.312 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:23:04.676 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:23:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:23:04.677 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:23:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:23:04.677 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:23:05 compute-0 nova_compute[189508]: 2025-12-01 23:23:05.195 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:23:06 compute-0 nova_compute[189508]: 2025-12-01 23:23:06.413 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:09 compute-0 nova_compute[189508]: 2025-12-01 23:23:09.316 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:11 compute-0 nova_compute[189508]: 2025-12-01 23:23:11.414 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:13 compute-0 nova_compute[189508]: 2025-12-01 23:23:13.193 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:23:14 compute-0 nova_compute[189508]: 2025-12-01 23:23:14.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:23:14 compute-0 nova_compute[189508]: 2025-12-01 23:23:14.320 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:15 compute-0 nova_compute[189508]: 2025-12-01 23:23:15.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:23:16 compute-0 nova_compute[189508]: 2025-12-01 23:23:16.416 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:16 compute-0 podman[266717]: 2025-12-01 23:23:16.858454641 +0000 UTC m=+0.126395390 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 23:23:17 compute-0 nova_compute[189508]: 2025-12-01 23:23:17.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:23:17 compute-0 nova_compute[189508]: 2025-12-01 23:23:17.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.246 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.247 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.247 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.247 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.556 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.558 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5320MB free_disk=72.11534881591797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.559 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.559 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.628 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.629 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.666 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.687 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.690 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:23:18 compute-0 nova_compute[189508]: 2025-12-01 23:23:18.690 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:23:18 compute-0 podman[266742]: 2025-12-01 23:23:18.837717625 +0000 UTC m=+0.100740946 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Dec  1 23:23:18 compute-0 podman[266741]: 2025-12-01 23:23:18.84532441 +0000 UTC m=+0.109485302 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:23:19 compute-0 nova_compute[189508]: 2025-12-01 23:23:19.323 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:20 compute-0 nova_compute[189508]: 2025-12-01 23:23:20.691 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:23:20 compute-0 nova_compute[189508]: 2025-12-01 23:23:20.692 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:23:20 compute-0 nova_compute[189508]: 2025-12-01 23:23:20.693 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:23:20 compute-0 nova_compute[189508]: 2025-12-01 23:23:20.715 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 23:23:21 compute-0 nova_compute[189508]: 2025-12-01 23:23:21.419 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:23 compute-0 podman[266780]: 2025-12-01 23:23:23.870180836 +0000 UTC m=+0.141071865 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 23:23:23 compute-0 podman[266779]: 2025-12-01 23:23:23.882983327 +0000 UTC m=+0.163714573 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec  1 23:23:24 compute-0 nova_compute[189508]: 2025-12-01 23:23:24.052 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:23:24 compute-0 nova_compute[189508]: 2025-12-01 23:23:24.325 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:25 compute-0 nova_compute[189508]: 2025-12-01 23:23:25.202 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:23:26 compute-0 nova_compute[189508]: 2025-12-01 23:23:26.423 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:28 compute-0 nova_compute[189508]: 2025-12-01 23:23:28.019 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:23:29 compute-0 nova_compute[189508]: 2025-12-01 23:23:29.328 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:29 compute-0 podman[203693]: time="2025-12-01T23:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:23:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:23:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4352 "" "Go-http-client/1.1"
Dec  1 23:23:31 compute-0 openstack_network_exporter[205887]: ERROR   23:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:23:31 compute-0 openstack_network_exporter[205887]: ERROR   23:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:23:31 compute-0 openstack_network_exporter[205887]: ERROR   23:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:23:31 compute-0 nova_compute[189508]: 2025-12-01 23:23:31.431 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:31 compute-0 openstack_network_exporter[205887]: ERROR   23:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:23:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:23:31 compute-0 openstack_network_exporter[205887]: ERROR   23:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:23:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:23:32 compute-0 podman[266822]: 2025-12-01 23:23:32.837895388 +0000 UTC m=+0.110942414 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_id=edpm, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Dec  1 23:23:32 compute-0 podman[266821]: 2025-12-01 23:23:32.847581121 +0000 UTC m=+0.118172577 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Dec  1 23:23:32 compute-0 podman[266823]: 2025-12-01 23:23:32.851714878 +0000 UTC m=+0.109725819 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, container_name=kepler, io.buildah.version=1.29.0, distribution-scope=public, name=ubi9, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container)
Dec  1 23:23:32 compute-0 podman[266820]: 2025-12-01 23:23:32.869523711 +0000 UTC m=+0.143993267 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 23:23:34 compute-0 nova_compute[189508]: 2025-12-01 23:23:34.332 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.284 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.285 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.290 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:23:35.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:23:36 compute-0 nova_compute[189508]: 2025-12-01 23:23:36.434 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:39 compute-0 nova_compute[189508]: 2025-12-01 23:23:39.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:23:39 compute-0 nova_compute[189508]: 2025-12-01 23:23:39.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 23:23:39 compute-0 nova_compute[189508]: 2025-12-01 23:23:39.221 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 23:23:39 compute-0 nova_compute[189508]: 2025-12-01 23:23:39.336 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:41 compute-0 nova_compute[189508]: 2025-12-01 23:23:41.437 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:44 compute-0 nova_compute[189508]: 2025-12-01 23:23:44.339 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:45 compute-0 nova_compute[189508]: 2025-12-01 23:23:45.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:23:46 compute-0 nova_compute[189508]: 2025-12-01 23:23:46.440 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:47 compute-0 podman[266900]: 2025-12-01 23:23:47.796625311 +0000 UTC m=+0.079891246 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 23:23:49 compute-0 nova_compute[189508]: 2025-12-01 23:23:49.341 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:49 compute-0 podman[266924]: 2025-12-01 23:23:49.842012493 +0000 UTC m=+0.109556355 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2)
Dec  1 23:23:49 compute-0 podman[266925]: 2025-12-01 23:23:49.880598002 +0000 UTC m=+0.143486492 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm)
Dec  1 23:23:50 compute-0 nova_compute[189508]: 2025-12-01 23:23:50.309 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:23:50 compute-0 nova_compute[189508]: 2025-12-01 23:23:50.310 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 23:23:51 compute-0 nova_compute[189508]: 2025-12-01 23:23:51.443 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:54 compute-0 nova_compute[189508]: 2025-12-01 23:23:54.344 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:54 compute-0 podman[266966]: 2025-12-01 23:23:54.819081848 +0000 UTC m=+0.092316007 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 23:23:54 compute-0 podman[266965]: 2025-12-01 23:23:54.903053569 +0000 UTC m=+0.178588203 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 23:23:56 compute-0 nova_compute[189508]: 2025-12-01 23:23:56.446 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:59 compute-0 nova_compute[189508]: 2025-12-01 23:23:59.347 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:23:59 compute-0 podman[203693]: time="2025-12-01T23:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:23:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 28679 "" "Go-http-client/1.1"
Dec  1 23:23:59 compute-0 podman[203693]: time="2025-12-01T23:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:23:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:23:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4351 "" "Go-http-client/1.1"
Dec  1 23:24:01 compute-0 openstack_network_exporter[205887]: ERROR   23:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:24:01 compute-0 openstack_network_exporter[205887]: ERROR   23:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:24:01 compute-0 openstack_network_exporter[205887]: ERROR   23:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:24:01 compute-0 openstack_network_exporter[205887]: ERROR   23:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:24:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:24:01 compute-0 openstack_network_exporter[205887]: ERROR   23:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:24:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:24:01 compute-0 nova_compute[189508]: 2025-12-01 23:24:01.448 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:03 compute-0 podman[267007]: 2025-12-01 23:24:03.80236199 +0000 UTC m=+0.082580062 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 23:24:03 compute-0 podman[267008]: 2025-12-01 23:24:03.835062974 +0000 UTC m=+0.111998314 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.buildah.version=1.29.0, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Dec  1 23:24:03 compute-0 podman[267005]: 2025-12-01 23:24:03.839150289 +0000 UTC m=+0.120589176 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 23:24:03 compute-0 podman[267006]: 2025-12-01 23:24:03.845123428 +0000 UTC m=+0.117044256 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 23:24:04 compute-0 nova_compute[189508]: 2025-12-01 23:24:04.350 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:24:04.677 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:24:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:24:04.678 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:24:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:24:04.678 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:24:06 compute-0 nova_compute[189508]: 2025-12-01 23:24:06.450 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:07 compute-0 nova_compute[189508]: 2025-12-01 23:24:07.214 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:24:09 compute-0 nova_compute[189508]: 2025-12-01 23:24:09.354 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:11 compute-0 nova_compute[189508]: 2025-12-01 23:24:11.454 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:14 compute-0 nova_compute[189508]: 2025-12-01 23:24:14.357 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:15 compute-0 nova_compute[189508]: 2025-12-01 23:24:15.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:24:16 compute-0 nova_compute[189508]: 2025-12-01 23:24:16.458 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:17 compute-0 nova_compute[189508]: 2025-12-01 23:24:17.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:24:18 compute-0 nova_compute[189508]: 2025-12-01 23:24:18.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:24:18 compute-0 nova_compute[189508]: 2025-12-01 23:24:18.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:24:18 compute-0 nova_compute[189508]: 2025-12-01 23:24:18.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:24:18 compute-0 podman[267082]: 2025-12-01 23:24:18.800973812 +0000 UTC m=+0.071057137 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.233 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.233 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.234 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.234 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.360 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.618 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.619 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5338MB free_disk=72.11608505249023GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.619 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.620 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.695 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.695 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.722 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing inventories for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.743 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating ProviderTree inventory for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.743 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Updating inventory in ProviderTree for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.764 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing aggregate associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.791 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Refreshing trait associations for resource provider 4ec36104-0fe8-4c15-929c-861f303bb3ec, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.820 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.835 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.838 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:24:19 compute-0 nova_compute[189508]: 2025-12-01 23:24:19.838 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:24:20 compute-0 podman[267107]: 2025-12-01 23:24:20.829879197 +0000 UTC m=+0.093192962 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3)
Dec  1 23:24:20 compute-0 nova_compute[189508]: 2025-12-01 23:24:20.839 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:24:20 compute-0 podman[267108]: 2025-12-01 23:24:20.848444261 +0000 UTC m=+0.109023339 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  1 23:24:21 compute-0 nova_compute[189508]: 2025-12-01 23:24:21.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:24:21 compute-0 nova_compute[189508]: 2025-12-01 23:24:21.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:24:21 compute-0 nova_compute[189508]: 2025-12-01 23:24:21.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:24:21 compute-0 nova_compute[189508]: 2025-12-01 23:24:21.215 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 23:24:21 compute-0 nova_compute[189508]: 2025-12-01 23:24:21.460 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:24 compute-0 nova_compute[189508]: 2025-12-01 23:24:24.364 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:25 compute-0 podman[267145]: 2025-12-01 23:24:25.859788626 +0000 UTC m=+0.131741511 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 23:24:25 compute-0 podman[267144]: 2025-12-01 23:24:25.936830281 +0000 UTC m=+0.213598662 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  1 23:24:26 compute-0 nova_compute[189508]: 2025-12-01 23:24:26.462 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:27 compute-0 nova_compute[189508]: 2025-12-01 23:24:27.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:24:29 compute-0 nova_compute[189508]: 2025-12-01 23:24:29.367 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:29 compute-0 podman[203693]: time="2025-12-01T23:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:24:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:24:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4354 "" "Go-http-client/1.1"
Dec  1 23:24:31 compute-0 openstack_network_exporter[205887]: ERROR   23:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:24:31 compute-0 openstack_network_exporter[205887]: ERROR   23:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:24:31 compute-0 openstack_network_exporter[205887]: ERROR   23:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:24:31 compute-0 openstack_network_exporter[205887]: ERROR   23:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:24:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:24:31 compute-0 openstack_network_exporter[205887]: ERROR   23:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:24:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:24:31 compute-0 nova_compute[189508]: 2025-12-01 23:24:31.464 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:34 compute-0 nova_compute[189508]: 2025-12-01 23:24:34.371 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:34 compute-0 podman[267184]: 2025-12-01 23:24:34.81844965 +0000 UTC m=+0.087629935 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  1 23:24:34 compute-0 podman[267183]: 2025-12-01 23:24:34.831165579 +0000 UTC m=+0.093922253 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3)
Dec  1 23:24:34 compute-0 podman[267185]: 2025-12-01 23:24:34.837037105 +0000 UTC m=+0.103298048 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, 
architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, name=ubi9, version=9.4, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 23:24:34 compute-0 podman[267182]: 2025-12-01 23:24:34.85599567 +0000 UTC m=+0.125085873 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 23:24:36 compute-0 nova_compute[189508]: 2025-12-01 23:24:36.468 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:39 compute-0 nova_compute[189508]: 2025-12-01 23:24:39.375 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:41 compute-0 nova_compute[189508]: 2025-12-01 23:24:41.471 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:44 compute-0 nova_compute[189508]: 2025-12-01 23:24:44.381 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:46 compute-0 nova_compute[189508]: 2025-12-01 23:24:46.475 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:49 compute-0 nova_compute[189508]: 2025-12-01 23:24:49.385 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:49 compute-0 podman[267260]: 2025-12-01 23:24:49.851987119 +0000 UTC m=+0.116617614 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:24:51 compute-0 nova_compute[189508]: 2025-12-01 23:24:51.478 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:51 compute-0 podman[267284]: 2025-12-01 23:24:51.836788639 +0000 UTC m=+0.115300888 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 23:24:51 compute-0 podman[267285]: 2025-12-01 23:24:51.861053594 +0000 UTC m=+0.123112738 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:24:54 compute-0 nova_compute[189508]: 2025-12-01 23:24:54.389 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:56 compute-0 nova_compute[189508]: 2025-12-01 23:24:56.482 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:56 compute-0 podman[267322]: 2025-12-01 23:24:56.884791127 +0000 UTC m=+0.140660633 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Dec  1 23:24:56 compute-0 podman[267321]: 2025-12-01 23:24:56.909809053 +0000 UTC m=+0.181667410 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 23:24:59 compute-0 nova_compute[189508]: 2025-12-01 23:24:59.393 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:24:59 compute-0 podman[203693]: time="2025-12-01T23:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:24:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:24:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4351 "" "Go-http-client/1.1"
Dec  1 23:25:01 compute-0 openstack_network_exporter[205887]: ERROR   23:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:25:01 compute-0 openstack_network_exporter[205887]: ERROR   23:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:25:01 compute-0 openstack_network_exporter[205887]: ERROR   23:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:25:01 compute-0 openstack_network_exporter[205887]: ERROR   23:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:25:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:25:01 compute-0 openstack_network_exporter[205887]: ERROR   23:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:25:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:25:01 compute-0 nova_compute[189508]: 2025-12-01 23:25:01.485 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:04 compute-0 nova_compute[189508]: 2025-12-01 23:25:04.397 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:25:04.679 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:25:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:25:04.680 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:25:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:25:04.680 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:25:05 compute-0 podman[267368]: 2025-12-01 23:25:05.842387771 +0000 UTC m=+0.101693462 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 23:25:05 compute-0 podman[267370]: 2025-12-01 23:25:05.852935099 +0000 UTC m=+0.095701333 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.buildah.version=1.33.7, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container)
Dec  1 23:25:05 compute-0 podman[267376]: 2025-12-01 23:25:05.866996696 +0000 UTC m=+0.102912907 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, container_name=kepler, name=ubi9, version=9.4, description=The Universal Base 
Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.tags=base rhel9, release=1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., distribution-scope=public)
Dec  1 23:25:05 compute-0 podman[267369]: 2025-12-01 23:25:05.887522356 +0000 UTC m=+0.151121388 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:25:06 compute-0 nova_compute[189508]: 2025-12-01 23:25:06.489 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:07 compute-0 nova_compute[189508]: 2025-12-01 23:25:07.194 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:25:09 compute-0 nova_compute[189508]: 2025-12-01 23:25:09.402 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:11 compute-0 nova_compute[189508]: 2025-12-01 23:25:11.491 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:14 compute-0 nova_compute[189508]: 2025-12-01 23:25:14.194 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:25:14 compute-0 nova_compute[189508]: 2025-12-01 23:25:14.405 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:15 compute-0 nova_compute[189508]: 2025-12-01 23:25:15.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:25:16 compute-0 nova_compute[189508]: 2025-12-01 23:25:16.493 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:18 compute-0 nova_compute[189508]: 2025-12-01 23:25:18.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:25:19 compute-0 nova_compute[189508]: 2025-12-01 23:25:19.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:25:19 compute-0 nova_compute[189508]: 2025-12-01 23:25:19.408 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.293 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.294 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.294 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.295 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.743 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.744 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5328MB free_disk=72.11608505249023GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.744 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.745 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:25:20 compute-0 podman[267448]: 2025-12-01 23:25:20.822968783 +0000 UTC m=+0.102101204 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.844 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.845 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.885 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.934 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.937 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:25:20 compute-0 nova_compute[189508]: 2025-12-01 23:25:20.938 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:25:21 compute-0 nova_compute[189508]: 2025-12-01 23:25:21.496 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:22 compute-0 podman[267473]: 2025-12-01 23:25:22.82109388 +0000 UTC m=+0.083313175 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  1 23:25:22 compute-0 podman[267472]: 2025-12-01 23:25:22.851534799 +0000 UTC m=+0.117791808 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec  1 23:25:22 compute-0 nova_compute[189508]: 2025-12-01 23:25:22.938 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:25:22 compute-0 nova_compute[189508]: 2025-12-01 23:25:22.939 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:25:22 compute-0 nova_compute[189508]: 2025-12-01 23:25:22.939 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:25:22 compute-0 nova_compute[189508]: 2025-12-01 23:25:22.961 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 23:25:22 compute-0 nova_compute[189508]: 2025-12-01 23:25:22.961 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:25:24 compute-0 nova_compute[189508]: 2025-12-01 23:25:24.413 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:26 compute-0 nova_compute[189508]: 2025-12-01 23:25:26.499 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:27 compute-0 nova_compute[189508]: 2025-12-01 23:25:27.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:25:27 compute-0 podman[267511]: 2025-12-01 23:25:27.814563218 +0000 UTC m=+0.094599394 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 23:25:27 compute-0 podman[267510]: 2025-12-01 23:25:27.883066792 +0000 UTC m=+0.155276546 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  1 23:25:29 compute-0 nova_compute[189508]: 2025-12-01 23:25:29.416 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:29 compute-0 podman[203693]: time="2025-12-01T23:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:25:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:25:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4350 "" "Go-http-client/1.1"
Dec  1 23:25:31 compute-0 openstack_network_exporter[205887]: ERROR   23:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:25:31 compute-0 openstack_network_exporter[205887]: ERROR   23:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:25:31 compute-0 openstack_network_exporter[205887]: ERROR   23:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:25:31 compute-0 openstack_network_exporter[205887]: ERROR   23:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:25:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:25:31 compute-0 openstack_network_exporter[205887]: ERROR   23:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:25:31 compute-0 nova_compute[189508]: 2025-12-01 23:25:31.502 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:25:34 compute-0 nova_compute[189508]: 2025-12-01 23:25:34.419 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.285 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.285 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc8c1f7bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c20080e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c30c4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c4696450>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c2008440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7be60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc8c1f7b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc8c1b662a0>] with cache [{}], pollster history [{'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc8c20080b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc8c2008140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc8c3222000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc8c1f7b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc8c1f7b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc8c4e55a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc8c1f7b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc8c1f7b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc8c1f7b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc8c4cf9040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc8c1f79820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc8c2008410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc8c1f7b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc8c1f7b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc8c1f7ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc8c1f7b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc8c1f7bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc8c1f7b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc8c1f7bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc8c1f7bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc8c1f7be30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc8c1f7bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc8c1f7b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc8c1f7bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc8c1f7b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc8c311bb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:35 compute-0 ceilometer_agent_compute[200237]: 2025-12-01 23:25:35.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 23:25:36 compute-0 nova_compute[189508]: 2025-12-01 23:25:36.506 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:36 compute-0 podman[267554]: 2025-12-01 23:25:36.785974412 +0000 UTC m=+0.068782953 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:25:36 compute-0 podman[267556]: 2025-12-01 23:25:36.805420351 +0000 UTC m=+0.079364512 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-type=git, io.openshift.expose-services=)
Dec  1 23:25:36 compute-0 podman[267555]: 2025-12-01 23:25:36.821205757 +0000 UTC m=+0.094433268 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator 
team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 23:25:36 compute-0 podman[267557]: 2025-12-01 23:25:36.855918857 +0000 UTC m=+0.128863250 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, architecture=x86_64)
Dec  1 23:25:39 compute-0 nova_compute[189508]: 2025-12-01 23:25:39.422 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:41 compute-0 nova_compute[189508]: 2025-12-01 23:25:41.507 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:44 compute-0 nova_compute[189508]: 2025-12-01 23:25:44.425 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:46 compute-0 nova_compute[189508]: 2025-12-01 23:25:46.510 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:49 compute-0 nova_compute[189508]: 2025-12-01 23:25:49.433 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:51 compute-0 nova_compute[189508]: 2025-12-01 23:25:51.513 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:51 compute-0 podman[267632]: 2025-12-01 23:25:51.845694861 +0000 UTC m=+0.115711529 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 23:25:53 compute-0 podman[267655]: 2025-12-01 23:25:53.831839359 +0000 UTC m=+0.110816550 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd)
Dec  1 23:25:53 compute-0 podman[267656]: 2025-12-01 23:25:53.838385114 +0000 UTC m=+0.098484242 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, 
container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 23:25:54 compute-0 nova_compute[189508]: 2025-12-01 23:25:54.438 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:56 compute-0 nova_compute[189508]: 2025-12-01 23:25:56.517 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:58 compute-0 podman[267696]: 2025-12-01 23:25:58.808917515 +0000 UTC m=+0.088288364 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 23:25:58 compute-0 podman[267695]: 2025-12-01 23:25:58.834730864 +0000 UTC m=+0.114060231 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 23:25:59 compute-0 nova_compute[189508]: 2025-12-01 23:25:59.441 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:25:59 compute-0 podman[203693]: time="2025-12-01T23:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:25:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:25:59 compute-0 podman[203693]: @ - - [01/Dec/2025:23:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4351 "" "Go-http-client/1.1"
Dec  1 23:26:01 compute-0 openstack_network_exporter[205887]: ERROR   23:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:26:01 compute-0 openstack_network_exporter[205887]: ERROR   23:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:26:01 compute-0 openstack_network_exporter[205887]: ERROR   23:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:26:01 compute-0 openstack_network_exporter[205887]: ERROR   23:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:26:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:26:01 compute-0 openstack_network_exporter[205887]: ERROR   23:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:26:01 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:26:01 compute-0 nova_compute[189508]: 2025-12-01 23:26:01.520 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:04 compute-0 nova_compute[189508]: 2025-12-01 23:26:04.445 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:26:04.680 106662 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:26:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:26:04.680 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:26:04 compute-0 ovn_metadata_agent[106657]: 2025-12-01 23:26:04.680 106662 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:26:06 compute-0 nova_compute[189508]: 2025-12-01 23:26:06.524 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:07 compute-0 podman[267742]: 2025-12-01 23:26:07.809890214 +0000 UTC m=+0.089147578 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 23:26:07 compute-0 podman[267744]: 2025-12-01 23:26:07.825123404 +0000 UTC m=+0.094933892 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 23:26:07 compute-0 podman[267741]: 2025-12-01 23:26:07.82922935 +0000 UTC m=+0.112836307 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 23:26:07 compute-0 podman[267743]: 2025-12-01 23:26:07.857239171 +0000 UTC m=+0.130226438 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, architecture=x86_64, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 23:26:08 compute-0 nova_compute[189508]: 2025-12-01 23:26:08.195 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:26:09 compute-0 nova_compute[189508]: 2025-12-01 23:26:09.449 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:11 compute-0 nova_compute[189508]: 2025-12-01 23:26:11.524 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:14 compute-0 nova_compute[189508]: 2025-12-01 23:26:14.450 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:16 compute-0 nova_compute[189508]: 2025-12-01 23:26:16.528 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:17 compute-0 nova_compute[189508]: 2025-12-01 23:26:17.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:26:19 compute-0 nova_compute[189508]: 2025-12-01 23:26:19.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:26:19 compute-0 nova_compute[189508]: 2025-12-01 23:26:19.453 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:20 compute-0 nova_compute[189508]: 2025-12-01 23:26:20.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.199 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.199 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.234 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.235 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.235 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.236 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.530 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.686 189512 WARNING nova.virt.libvirt.driver [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.688 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5324MB free_disk=72.11608505249023GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.688 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.688 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.899 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.899 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.926 189512 DEBUG nova.compute.provider_tree [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed in ProviderTree for provider: 4ec36104-0fe8-4c15-929c-861f303bb3ec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.946 189512 DEBUG nova.scheduler.client.report [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Inventory has not changed for provider 4ec36104-0fe8-4c15-929c-861f303bb3ec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.948 189512 DEBUG nova.compute.resource_tracker [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 23:26:21 compute-0 nova_compute[189508]: 2025-12-01 23:26:21.949 189512 DEBUG oslo_concurrency.lockutils [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 23:26:22 compute-0 podman[267822]: 2025-12-01 23:26:22.853889158 +0000 UTC m=+0.119780923 container health_status 8fb1ceb19772c617d2db4b8e41b6c0742126a84224667b14e004d92153252df1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 23:26:22 compute-0 nova_compute[189508]: 2025-12-01 23:26:22.949 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:26:23 compute-0 nova_compute[189508]: 2025-12-01 23:26:23.200 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:26:23 compute-0 nova_compute[189508]: 2025-12-01 23:26:23.200 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 23:26:23 compute-0 nova_compute[189508]: 2025-12-01 23:26:23.201 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 23:26:23 compute-0 nova_compute[189508]: 2025-12-01 23:26:23.224 189512 DEBUG nova.compute.manager [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 23:26:24 compute-0 nova_compute[189508]: 2025-12-01 23:26:24.456 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:24 compute-0 podman[267845]: 2025-12-01 23:26:24.843622529 +0000 UTC m=+0.122438078 container health_status f192dad1d7d3945ce21d0255b53270c0a1843a16333bda215807f7e5ce8babbe (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm)
Dec  1 23:26:24 compute-0 podman[267844]: 2025-12-01 23:26:24.866379561 +0000 UTC m=+0.142143344 container health_status a8a6883dc3bf89e36b2173b72389e6f0d41aeece1e7ae5d2ed536f854dc8d3a8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 23:26:26 compute-0 nova_compute[189508]: 2025-12-01 23:26:26.533 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:29 compute-0 nova_compute[189508]: 2025-12-01 23:26:29.198 189512 DEBUG oslo_service.periodic_task [None req-a5910433-6909-4b96-a36a-18bf87125aef - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 23:26:29 compute-0 nova_compute[189508]: 2025-12-01 23:26:29.459 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:29 compute-0 podman[203693]: time="2025-12-01T23:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 23:26:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  1 23:26:29 compute-0 podman[203693]: @ - - [01/Dec/2025:23:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4349 "" "Go-http-client/1.1"
Dec  1 23:26:29 compute-0 podman[267884]: 2025-12-01 23:26:29.841214046 +0000 UTC m=+0.104450890 container health_status ae70584dc470cca061b3450ec32795a52c203243cc8670e86e52674594f2a9e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 23:26:29 compute-0 podman[267883]: 2025-12-01 23:26:29.901701954 +0000 UTC m=+0.171793961 container health_status 6222da8ad8b6cefd324afe935c4c12b1be14228af42b9023fd7cc3060580b367 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 23:26:31 compute-0 openstack_network_exporter[205887]: ERROR   23:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 23:26:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:26:31 compute-0 openstack_network_exporter[205887]: ERROR   23:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:26:31 compute-0 openstack_network_exporter[205887]: ERROR   23:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 23:26:31 compute-0 openstack_network_exporter[205887]: ERROR   23:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 23:26:31 compute-0 openstack_network_exporter[205887]: 
Dec  1 23:26:31 compute-0 openstack_network_exporter[205887]: ERROR   23:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 23:26:31 compute-0 nova_compute[189508]: 2025-12-01 23:26:31.536 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:34 compute-0 nova_compute[189508]: 2025-12-01 23:26:34.462 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:36 compute-0 nova_compute[189508]: 2025-12-01 23:26:36.539 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:38 compute-0 podman[267927]: 2025-12-01 23:26:38.797660829 +0000 UTC m=+0.075274857 container health_status 12b9f6a6dba01895cb7ffab6b307b7bb781456c3d6d90d48e4458f06dcfdec5d (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 23:26:38 compute-0 podman[267929]: 2025-12-01 23:26:38.805239933 +0000 UTC m=+0.074238677 container health_status 9eeeb459b098cd8f468c6f1b198061b863a4f8ea18881957b985099a6b4bce74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64)
Dec  1 23:26:38 compute-0 podman[267930]: 2025-12-01 23:26:38.822697206 +0000 UTC m=+0.092527044 container health_status c6436dd0e6605273da025c13648ab33f4809143a03d70b716073e550e822b5d2 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, container_name=kepler)
Dec  1 23:26:38 compute-0 podman[267928]: 2025-12-01 23:26:38.842884946 +0000 UTC m=+0.119781883 container health_status 1c63b98f2bc83b18739654362115cc65c9c8d3e34506cb3280a3344dde682841 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, 
tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 23:26:39 compute-0 systemd-logind[788]: New session 34 of user zuul.
Dec  1 23:26:39 compute-0 systemd[1]: Started Session 34 of User zuul.
Dec  1 23:26:39 compute-0 nova_compute[189508]: 2025-12-01 23:26:39.464 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:41 compute-0 nova_compute[189508]: 2025-12-01 23:26:41.544 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:44 compute-0 nova_compute[189508]: 2025-12-01 23:26:44.467 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:44 compute-0 ovs-vsctl[268175]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  1 23:26:45 compute-0 virtqemud[189130]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  1 23:26:45 compute-0 virtqemud[189130]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  1 23:26:46 compute-0 virtqemud[189130]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  1 23:26:46 compute-0 nova_compute[189508]: 2025-12-01 23:26:46.545 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:49 compute-0 nova_compute[189508]: 2025-12-01 23:26:49.469 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 23:26:49 compute-0 systemd[1]: Starting Hostname Service...
Dec  1 23:26:49 compute-0 systemd[1]: Started Hostname Service.
Dec  1 23:26:51 compute-0 nova_compute[189508]: 2025-12-01 23:26:51.546 189512 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
